Jan 27 13:06:55 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 27 13:06:55 crc restorecon[4697]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 13:06:55 crc restorecon[4697]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 27 13:06:55 crc restorecon[4697]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:55 crc 
restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 13:06:55 crc restorecon[4697]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 13:06:55 crc restorecon[4697]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 13:06:55 crc restorecon[4697]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 13:06:55 crc 
restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 
13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 13:06:55 crc restorecon[4697]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 13:06:55 crc 
restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 13:06:55 crc restorecon[4697]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 13:06:55 crc restorecon[4697]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 13:06:55 crc restorecon[4697]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 13:06:55 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 13:06:56 crc 
restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56
crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 
13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 13:06:56 crc 
restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc 
restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc 
restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 13:06:56 crc restorecon[4697]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 
crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc 
restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc 
restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc 
restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc 
restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:06:56 crc 
restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:06:56 crc restorecon[4697]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 13:06:56 crc restorecon[4697]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 13:06:56 crc restorecon[4697]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 27 13:06:57 crc kubenswrapper[4786]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 13:06:57 crc kubenswrapper[4786]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 27 13:06:57 crc kubenswrapper[4786]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 13:06:57 crc kubenswrapper[4786]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 27 13:06:57 crc kubenswrapper[4786]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 27 13:06:57 crc kubenswrapper[4786]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.214717 4786 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224098 4786 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224136 4786 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224143 4786 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224148 4786 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224152 4786 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224157 4786 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224161 4786 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224166 4786 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224170 4786 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224175 
4786 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224180 4786 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224185 4786 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224189 4786 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224194 4786 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224200 4786 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224207 4786 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224213 4786 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224218 4786 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224224 4786 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224231 4786 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224237 4786 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224242 4786 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224247 4786 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224251 4786 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224256 4786 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224263 4786 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224268 4786 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224273 4786 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224278 4786 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224282 4786 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224286 4786 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224290 4786 feature_gate.go:330] unrecognized feature gate: Example
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224295 4786 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224300 4786 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224305 4786 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224309 4786 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224314 4786 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224318 4786 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224323 4786 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224327 4786 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224331 4786 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224336 4786 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224340 4786 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224346 4786 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224351 4786 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224355 4786 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224360 4786 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224364 4786 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224370 4786 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224376 4786 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224383 4786 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224389 4786 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224394 4786 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224398 4786 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224403 4786 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224408 4786 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224413 4786 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224417 4786 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224421 4786 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224427 4786 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224433 4786 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224437 4786 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224442 4786 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224447 4786 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224451 4786 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224456 4786 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224460 4786 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224466 4786 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224471 4786 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224475 4786 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.224479 4786 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224591 4786 flags.go:64] FLAG: --address="0.0.0.0"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224622 4786 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224633 4786 flags.go:64] FLAG: --anonymous-auth="true"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224640 4786 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224649 4786 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224655 4786 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224663 4786 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224670 4786 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224675 4786 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224682 4786 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224688 4786 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224694 4786 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224701 4786 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224706 4786 flags.go:64] FLAG: --cgroup-root=""
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224712 4786 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224717 4786 flags.go:64] FLAG: --client-ca-file=""
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224723 4786 flags.go:64] FLAG: --cloud-config=""
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224729 4786 flags.go:64] FLAG: --cloud-provider=""
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224734 4786 flags.go:64] FLAG: --cluster-dns="[]"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224741 4786 flags.go:64] FLAG: --cluster-domain=""
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224747 4786 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224752 4786 flags.go:64] FLAG: --config-dir=""
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224758 4786 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224764 4786 flags.go:64] FLAG: --container-log-max-files="5"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224772 4786 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224778 4786 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224783 4786 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224789 4786 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224795 4786 flags.go:64] FLAG: --contention-profiling="false"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224800 4786 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224806 4786 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224811 4786 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224816 4786 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224823 4786 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224829 4786 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224835 4786 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224840 4786 flags.go:64] FLAG: --enable-load-reader="false"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224847 4786 flags.go:64] FLAG: --enable-server="true"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224852 4786 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224860 4786 flags.go:64] FLAG: --event-burst="100"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224866 4786 flags.go:64] FLAG: --event-qps="50"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224871 4786 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224877 4786 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224882 4786 flags.go:64] FLAG: --eviction-hard=""
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224892 4786 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224897 4786 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224902 4786 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224908 4786 flags.go:64] FLAG: --eviction-soft=""
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224913 4786 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224918 4786 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224923 4786 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224928 4786 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224934 4786 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224939 4786 flags.go:64] FLAG: --fail-swap-on="true"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224944 4786 flags.go:64] FLAG: --feature-gates=""
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224950 4786 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224956 4786 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224961 4786 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224966 4786 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224972 4786 flags.go:64] FLAG: --healthz-port="10248"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224977 4786 flags.go:64] FLAG: --help="false"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224982 4786 flags.go:64] FLAG: --hostname-override=""
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224988 4786 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224993 4786 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.224998 4786 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225003 4786 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225008 4786 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225013 4786 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225017 4786 flags.go:64] FLAG: --image-service-endpoint=""
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225023 4786 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225028 4786 flags.go:64] FLAG: --kube-api-burst="100"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225033 4786 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225038 4786 flags.go:64] FLAG: --kube-api-qps="50"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225044 4786 flags.go:64] FLAG: --kube-reserved=""
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225049 4786 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225053 4786 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225059 4786 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225064 4786 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225069 4786 flags.go:64] FLAG: --lock-file=""
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225073 4786 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225078 4786 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225083 4786 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225091 4786 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225096 4786 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225101 4786 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225105 4786 flags.go:64] FLAG: --logging-format="text"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225111 4786 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225116 4786 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225121 4786 flags.go:64] FLAG: --manifest-url=""
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225126 4786 flags.go:64] FLAG: --manifest-url-header=""
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225132 4786 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225138 4786 flags.go:64] FLAG: --max-open-files="1000000"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225145 4786 flags.go:64] FLAG: --max-pods="110"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225150 4786 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225157 4786 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225162 4786 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225167 4786 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225173 4786 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225178 4786 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225183 4786 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225198 4786 flags.go:64] FLAG: --node-status-max-images="50"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225203 4786 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225208 4786 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225213 4786 flags.go:64] FLAG: --pod-cidr=""
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225218 4786 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225227 4786 flags.go:64] FLAG: --pod-manifest-path=""
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225232 4786 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225237 4786 flags.go:64] FLAG: --pods-per-core="0"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225242 4786 flags.go:64] FLAG: --port="10250"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225249 4786 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225254 4786 flags.go:64] FLAG: --provider-id=""
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225259 4786 flags.go:64] FLAG: --qos-reserved=""
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225264 4786 flags.go:64] FLAG: --read-only-port="10255"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225269 4786 flags.go:64] FLAG: --register-node="true"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225274 4786 flags.go:64] FLAG: --register-schedulable="true"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225279 4786 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225287 4786 flags.go:64] FLAG: --registry-burst="10"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225292 4786 flags.go:64] FLAG: --registry-qps="5"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225297 4786 flags.go:64] FLAG: --reserved-cpus=""
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225301 4786 flags.go:64] FLAG: --reserved-memory=""
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225307 4786 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225313 4786 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225317 4786 flags.go:64] FLAG: --rotate-certificates="false"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225322 4786 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225327 4786 flags.go:64] FLAG: --runonce="false"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225332 4786 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225337 4786 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225343 4786 flags.go:64] FLAG: --seccomp-default="false"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225348 4786 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225353 4786 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225358 4786 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225363 4786 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225368 4786 flags.go:64] FLAG: --storage-driver-password="root"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225372 4786 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225378 4786 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225382 4786 flags.go:64] FLAG: --storage-driver-user="root"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225387 4786 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225392 4786 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225396 4786 flags.go:64] FLAG: --system-cgroups=""
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225401 4786 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225410 4786 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225416 4786 flags.go:64] FLAG: --tls-cert-file=""
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225422 4786 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225428 4786 flags.go:64] FLAG: --tls-min-version=""
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225434 4786 flags.go:64] FLAG: --tls-private-key-file=""
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225440 4786 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225446 4786 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225451 4786 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225457 4786 flags.go:64] FLAG: --v="2"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225469 4786 flags.go:64] FLAG: --version="false"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225476 4786 flags.go:64] FLAG: --vmodule=""
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225483 4786 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225489 4786 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225641 4786 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225648 4786 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225654 4786 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225658 4786 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225663 4786 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225667 4786 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225671 4786 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225676 4786 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225679 4786 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225684 4786 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225688 4786 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225692 4786 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225696 4786 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225700 4786 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225704 4786 feature_gate.go:330] unrecognized feature gate: Example
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225708 4786 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225712 4786 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225716 4786 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225721 4786 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225724 4786 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225737 4786 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225741 4786 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225745 4786 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225749 4786 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225753 4786 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225757 4786 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225762 4786 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225766 4786 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225771 4786 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225775 4786 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225779 4786 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225783 4786 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225787 4786 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225791 4786 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225798 4786 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225801 4786 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225806 4786 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225810 4786 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225814 4786 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225818 4786 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225825 4786 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225829 4786 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225833 4786 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225837 4786 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225841 4786 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225844 4786 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225848 4786 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225854 4786 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225860 4786 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225865 4786 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225869 4786 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225874 4786 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225881 4786 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225885 4786 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225889 4786 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225893 4786 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225897 4786 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225901 4786 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225906 4786 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225911 4786 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225916 4786 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225921 4786 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225927 4786 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225932 4786 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225938 4786 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225942 4786 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225949 4786 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225953 4786 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225957 4786 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225961 4786 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.225966 4786 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.225983 4786 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.235971 4786 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.235998 4786 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236082 4786 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236090 4786 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236094 4786 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236098 4786 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236102 4786 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236105 4786 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236109 4786 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236112 4786 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236117 4786 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236122 4786 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236126 4786 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236130 4786 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236133 4786 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236137 4786 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236140 4786 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236144 4786 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236154 4786 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236157 4786 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236161 4786 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236170 4786 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236174 4786 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236177 4786 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236181 4786 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236184 4786 feature_gate.go:330] unrecognized feature gate: 
ManagedBootImages Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236188 4786 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236193 4786 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236198 4786 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236202 4786 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236206 4786 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236210 4786 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236215 4786 feature_gate.go:330] unrecognized feature gate: Example Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236219 4786 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236222 4786 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236226 4786 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236229 4786 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236232 4786 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236236 4786 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236239 4786 feature_gate.go:330] unrecognized feature gate: 
MultiArchInstallAWS Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236243 4786 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236247 4786 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236251 4786 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236256 4786 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236260 4786 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236264 4786 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236267 4786 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236271 4786 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236275 4786 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236279 4786 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236282 4786 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236286 4786 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236289 4786 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236294 4786 feature_gate.go:353] Setting GA 
feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236298 4786 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236302 4786 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236306 4786 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236317 4786 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236321 4786 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236324 4786 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236328 4786 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236331 4786 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236335 4786 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236339 4786 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236344 4786 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236347 4786 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236351 4786 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236354 4786 feature_gate.go:330] unrecognized feature gate: 
AdminNetworkPolicy Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236358 4786 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236362 4786 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236365 4786 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236369 4786 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236372 4786 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.236380 4786 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236520 4786 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236526 4786 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236530 4786 feature_gate.go:330] unrecognized feature gate: Example Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236534 4786 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236538 4786 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236542 4786 
feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236546 4786 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236549 4786 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236553 4786 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236557 4786 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236561 4786 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236565 4786 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236569 4786 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236573 4786 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236577 4786 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236580 4786 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236584 4786 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236587 4786 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236591 4786 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236615 4786 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor 
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236622 4786 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236626 4786 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236632 4786 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236637 4786 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236641 4786 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236644 4786 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236648 4786 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236651 4786 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236655 4786 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236658 4786 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236661 4786 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236665 4786 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236668 4786 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236672 4786 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 
13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236675 4786 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236679 4786 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236682 4786 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236685 4786 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236689 4786 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236692 4786 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236695 4786 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236699 4786 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236703 4786 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236706 4786 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236709 4786 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236713 4786 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236716 4786 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236720 4786 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236723 4786 feature_gate.go:330] 
unrecognized feature gate: NutanixMultiSubnets Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236726 4786 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236729 4786 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236733 4786 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236738 4786 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236742 4786 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236747 4786 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236758 4786 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236762 4786 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236767 4786 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236771 4786 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236775 4786 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236779 4786 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236783 4786 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236786 4786 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236789 4786 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236794 4786 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236797 4786 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236801 4786 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236805 4786 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236808 4786 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236812 4786 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.236816 4786 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.236822 4786 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true 
DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.238432 4786 server.go:940] "Client rotation is on, will bootstrap in background" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.242454 4786 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.243379 4786 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.245572 4786 server.go:997] "Starting client certificate rotation" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.245616 4786 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.245974 4786 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-16 13:01:31.581349901 +0000 UTC Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.246073 4786 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.269377 4786 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 13:06:57 crc kubenswrapper[4786]: E0127 13:06:57.274492 4786 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create 
certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.275725 4786 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.290110 4786 log.go:25] "Validated CRI v1 runtime API" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.348878 4786 log.go:25] "Validated CRI v1 image API" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.351180 4786 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.357660 4786 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-27-13-02-38-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.357712 4786 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.381733 4786 manager.go:217] Machine: {Timestamp:2026-01-27 13:06:57.379201584 +0000 UTC m=+0.589815743 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 
AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:56795cdc-7796-46ae-b42e-edbe6c464279 BootID:18042af8-71e3-4882-b2ca-158fe4a2012f Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:c6:b7:6e Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:c6:b7:6e Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:7c:8c:b1 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:23:57:ad Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:57:1f:43 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:bc:dd:e0 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:ee:54:88:45:41:24 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:2e:84:6b:7a:35:23 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified 
Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 
Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.382078 4786 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.382244 4786 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.382565 4786 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.382787 4786 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.382834 4786 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.383170 4786 topology_manager.go:138] "Creating topology manager with none policy" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.383188 4786 container_manager_linux.go:303] "Creating device plugin manager" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.383793 4786 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.383832 4786 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.384112 4786 state_mem.go:36] "Initialized new in-memory state store" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.384217 4786 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.389759 4786 kubelet.go:418] "Attempting to sync node with API server" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.389812 4786 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.389833 4786 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.389846 4786 kubelet.go:324] "Adding apiserver pod source" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.389858 4786 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.400513 4786 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.400700 4786 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused Jan 27 13:06:57 crc kubenswrapper[4786]: E0127 13:06:57.400804 4786 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.400703 4786 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused Jan 27 13:06:57 crc kubenswrapper[4786]: E0127 13:06:57.400856 4786 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.402176 4786 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.404005 4786 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.407556 4786 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.407669 4786 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.407723 4786 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.407773 4786 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.407827 4786 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.407873 4786 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.407917 4786 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.407969 4786 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.408017 4786 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.408064 4786 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.408124 4786 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.408172 4786 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.408239 4786 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.408705 4786 server.go:1280] "Started kubelet" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.409002 4786 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.408955 4786 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.409647 4786 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.409738 4786 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 27 13:06:57 crc systemd[1]: Started Kubernetes Kubelet. Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.411729 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.411781 4786 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.411968 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 20:32:21.974669807 +0000 UTC Jan 27 13:06:57 crc kubenswrapper[4786]: E0127 13:06:57.411980 4786 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.412036 4786 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.412042 4786 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 
13:06:57.412115 4786 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 27 13:06:57 crc kubenswrapper[4786]: E0127 13:06:57.412483 4786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" interval="200ms" Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.412592 4786 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused Jan 27 13:06:57 crc kubenswrapper[4786]: E0127 13:06:57.412688 4786 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.413485 4786 factory.go:153] Registering CRI-O factory Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.413505 4786 factory.go:221] Registration of the crio container factory successfully Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.413558 4786 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.413566 4786 factory.go:55] Registering systemd factory Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.413573 4786 factory.go:221] Registration of the systemd container factory successfully Jan 27 13:06:57 crc kubenswrapper[4786]: 
I0127 13:06:57.413625 4786 factory.go:103] Registering Raw factory Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.413642 4786 manager.go:1196] Started watching for new ooms in manager Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.414160 4786 manager.go:319] Starting recovery of all containers Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.415022 4786 server.go:460] "Adding debug handlers to kubelet server" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.424627 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.424688 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.424706 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.424717 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.424729 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.424743 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.424752 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.424762 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.424777 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.424786 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.424799 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.424808 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.424820 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.424832 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.424844 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.424870 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.424884 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.424894 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.424904 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.424916 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.424927 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.424940 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.424950 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" 
seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.424960 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.424972 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.424981 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.424996 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425009 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425023 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: 
I0127 13:06:57.425033 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425076 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425088 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425100 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425110 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425121 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425134 4786 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425143 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425156 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425166 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425175 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425187 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425196 4786 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425206 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425219 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425231 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425244 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425257 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425268 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425279 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425289 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425302 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425311 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425328 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425342 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 27 
13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425356 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425371 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425381 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425396 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425406 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425420 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425429 4786 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425440 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425451 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425460 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425472 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425481 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425492 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425504 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425514 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425524 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425536 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425547 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425563 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425578 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425590 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425628 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425644 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425659 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425670 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" 
seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425680 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425692 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425702 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425714 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425723 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425732 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: 
I0127 13:06:57.425744 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.425753 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.426466 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.426554 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.426574 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.426589 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.426630 4786 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.426648 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.426669 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.426684 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.426699 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: E0127 13:06:57.420375 4786 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.5:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e985a58df0f6b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 13:06:57.408675691 +0000 UTC m=+0.619289810,LastTimestamp:2026-01-27 13:06:57.408675691 +0000 UTC m=+0.619289810,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.429139 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.429250 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.429272 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.429297 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.429311 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.429328 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.429362 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430252 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430292 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430311 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430325 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" 
seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430338 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430352 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430363 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430375 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430387 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430398 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430409 
4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430420 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430429 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430439 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430450 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430460 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430471 4786 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430489 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430505 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430520 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430537 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430554 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430572 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430582 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430618 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430628 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430638 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430648 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430658 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" 
seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430668 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430679 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430689 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430705 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430714 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430724 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 27 13:06:57 crc 
kubenswrapper[4786]: I0127 13:06:57.430733 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430743 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430754 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430766 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430778 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430788 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430798 4786 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430807 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430817 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430826 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430837 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430846 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430857 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430867 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430877 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430888 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430901 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430911 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430923 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430968 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430981 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.430994 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.431008 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.431020 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.431033 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.431045 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.431054 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.431063 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.431072 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.431082 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.431091 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.431100 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.431211 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.431223 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.431237 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.435671 4786 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.435774 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.435835 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.435902 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.435964 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.436018 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.436074 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.436127 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.436187 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.436260 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.436318 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.436372 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.436439 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.436496 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" 
seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.436549 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.436616 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.436672 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.436725 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.436776 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.436835 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.436887 4786 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.436939 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.436990 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.437047 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.437107 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.437162 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.437217 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.437270 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.437343 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.437401 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.437455 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.437508 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.437560 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.437624 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.437679 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.437737 4786 reconstruct.go:97] "Volume reconstruction finished" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.437783 4786 reconciler.go:26] "Reconciler: start to sync state" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.437818 4786 manager.go:324] Recovery completed Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.450281 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.452725 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.452784 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.452793 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.453507 4786 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.453586 4786 cpu_manager.go:226] "Reconciling" 
reconcilePeriod="10s" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.453676 4786 state_mem.go:36] "Initialized new in-memory state store" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.461315 4786 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.463060 4786 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.463303 4786 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.463560 4786 kubelet.go:2335] "Starting kubelet main sync loop" Jan 27 13:06:57 crc kubenswrapper[4786]: E0127 13:06:57.463737 4786 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.471695 4786 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused Jan 27 13:06:57 crc kubenswrapper[4786]: E0127 13:06:57.471772 4786 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.480247 4786 policy_none.go:49] "None policy: Start" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.481363 4786 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.481465 4786 state_mem.go:35] "Initializing new in-memory 
state store" Jan 27 13:06:57 crc kubenswrapper[4786]: E0127 13:06:57.512280 4786 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.537380 4786 manager.go:334] "Starting Device Plugin manager" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.537696 4786 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.537720 4786 server.go:79] "Starting device plugin registration server" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.538161 4786 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.538181 4786 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.538405 4786 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.538476 4786 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.538482 4786 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 27 13:06:57 crc kubenswrapper[4786]: E0127 13:06:57.546287 4786 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.564705 4786 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.564856 4786 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.566546 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.566567 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.566574 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.566705 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.567129 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.567158 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.567554 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.567597 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.567644 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.567799 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.568038 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.568143 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.569466 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.569488 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.569496 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.569494 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.569624 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.569633 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.569788 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.569891 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.569954 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.570110 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:06:57 crc kubenswrapper[4786]: 
I0127 13:06:57.570168 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.570194 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.571539 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.571556 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.571570 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.571748 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.571770 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.571778 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.571859 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.572045 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.572093 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.573214 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.573231 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.573239 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.573336 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.573356 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.573400 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.573562 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.573897 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.574046 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.574089 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.574099 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:06:57 crc kubenswrapper[4786]: E0127 13:06:57.613109 4786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" interval="400ms" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.638655 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.639712 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.639767 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.639797 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.639821 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.639844 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.639905 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.639973 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.640010 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.640032 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod 
\"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.640074 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.640109 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.640139 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.640168 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.640218 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: 
\"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.640260 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.640796 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.640836 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.640847 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.640873 4786 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 13:06:57 crc kubenswrapper[4786]: E0127 13:06:57.641299 4786 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.5:6443: connect: connection refused" node="crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.741818 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.741906 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.741950 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.741991 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.742032 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.742075 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.742078 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod 
\"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.742116 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.742158 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.742200 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.742209 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.742251 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.742268 4786 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.742203 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.742162 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.742208 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.742118 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.742459 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.742490 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.742505 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.742531 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.742546 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.742529 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.742571 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.742648 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.742671 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.742740 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.742571 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.742683 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: 
\"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.743210 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.841720 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.843203 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.843279 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.843299 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.843350 4786 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 13:06:57 crc kubenswrapper[4786]: E0127 13:06:57.844207 4786 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.5:6443: connect: connection refused" node="crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.898288 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.906733 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.933294 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.949772 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-4ff3840f74ab5934828f01ab989a944d3b520ffee5e9bfdd41320da2650ce65f WatchSource:0}: Error finding container 4ff3840f74ab5934828f01ab989a944d3b520ffee5e9bfdd41320da2650ce65f: Status 404 returned error can't find the container with id 4ff3840f74ab5934828f01ab989a944d3b520ffee5e9bfdd41320da2650ce65f Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.950143 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-75a90d465ccaef6fe9bf55a0f0f794363f7204deef5590dc4a424c3f1deedf05 WatchSource:0}: Error finding container 75a90d465ccaef6fe9bf55a0f0f794363f7204deef5590dc4a424c3f1deedf05: Status 404 returned error can't find the container with id 75a90d465ccaef6fe9bf55a0f0f794363f7204deef5590dc4a424c3f1deedf05 Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.954054 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-d8e52975694317ce04aa110aae08df37d3bcbac9988a2bf989d59d746a8be99e WatchSource:0}: Error finding container d8e52975694317ce04aa110aae08df37d3bcbac9988a2bf989d59d746a8be99e: Status 404 returned error can't find the container with id d8e52975694317ce04aa110aae08df37d3bcbac9988a2bf989d59d746a8be99e Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.961006 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: I0127 13:06:57.966470 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.986310 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-8c0cededdf6907849f20d39a71eeb9782d5e60ad8b2bae8fd1c56a556634ebe7 WatchSource:0}: Error finding container 8c0cededdf6907849f20d39a71eeb9782d5e60ad8b2bae8fd1c56a556634ebe7: Status 404 returned error can't find the container with id 8c0cededdf6907849f20d39a71eeb9782d5e60ad8b2bae8fd1c56a556634ebe7 Jan 27 13:06:57 crc kubenswrapper[4786]: W0127 13:06:57.988867 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-4e1edcc2f6701581fd9e88298f49b76205c74d545b735f13038328107db0520e WatchSource:0}: Error finding container 4e1edcc2f6701581fd9e88298f49b76205c74d545b735f13038328107db0520e: Status 404 returned error can't find the container with id 4e1edcc2f6701581fd9e88298f49b76205c74d545b735f13038328107db0520e Jan 27 13:06:58 crc kubenswrapper[4786]: E0127 13:06:58.013801 4786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" interval="800ms" Jan 27 13:06:58 crc kubenswrapper[4786]: I0127 13:06:58.244527 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:06:58 crc kubenswrapper[4786]: I0127 13:06:58.245973 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 13:06:58 crc kubenswrapper[4786]: I0127 13:06:58.246017 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:06:58 crc kubenswrapper[4786]: I0127 13:06:58.246030 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:06:58 crc kubenswrapper[4786]: I0127 13:06:58.246062 4786 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 13:06:58 crc kubenswrapper[4786]: E0127 13:06:58.246632 4786 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.5:6443: connect: connection refused" node="crc" Jan 27 13:06:58 crc kubenswrapper[4786]: W0127 13:06:58.279567 4786 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused Jan 27 13:06:58 crc kubenswrapper[4786]: E0127 13:06:58.279738 4786 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Jan 27 13:06:58 crc kubenswrapper[4786]: I0127 13:06:58.410834 4786 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused Jan 27 13:06:58 crc kubenswrapper[4786]: I0127 13:06:58.413000 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 
2025-12-28 10:46:53.143091308 +0000 UTC Jan 27 13:06:58 crc kubenswrapper[4786]: I0127 13:06:58.467119 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"75a90d465ccaef6fe9bf55a0f0f794363f7204deef5590dc4a424c3f1deedf05"} Jan 27 13:06:58 crc kubenswrapper[4786]: I0127 13:06:58.469967 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4ff3840f74ab5934828f01ab989a944d3b520ffee5e9bfdd41320da2650ce65f"} Jan 27 13:06:58 crc kubenswrapper[4786]: I0127 13:06:58.470812 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4e1edcc2f6701581fd9e88298f49b76205c74d545b735f13038328107db0520e"} Jan 27 13:06:58 crc kubenswrapper[4786]: I0127 13:06:58.471668 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"8c0cededdf6907849f20d39a71eeb9782d5e60ad8b2bae8fd1c56a556634ebe7"} Jan 27 13:06:58 crc kubenswrapper[4786]: I0127 13:06:58.472420 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d8e52975694317ce04aa110aae08df37d3bcbac9988a2bf989d59d746a8be99e"} Jan 27 13:06:58 crc kubenswrapper[4786]: W0127 13:06:58.730640 4786 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused Jan 27 13:06:58 crc 
kubenswrapper[4786]: E0127 13:06:58.730822 4786 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Jan 27 13:06:58 crc kubenswrapper[4786]: E0127 13:06:58.814476 4786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" interval="1.6s" Jan 27 13:06:58 crc kubenswrapper[4786]: W0127 13:06:58.839828 4786 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused Jan 27 13:06:58 crc kubenswrapper[4786]: E0127 13:06:58.839911 4786 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Jan 27 13:06:58 crc kubenswrapper[4786]: W0127 13:06:58.906928 4786 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused Jan 27 13:06:58 crc kubenswrapper[4786]: E0127 13:06:58.907003 4786 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.047626 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.048791 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.048822 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.048832 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.048854 4786 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 13:06:59 crc kubenswrapper[4786]: E0127 13:06:59.049346 4786 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.5:6443: connect: connection refused" node="crc" Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.330209 4786 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 13:06:59 crc kubenswrapper[4786]: E0127 13:06:59.331764 4786 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.410751 4786 csi_plugin.go:884] Failed to contact API server 
when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.413847 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 21:09:53.676261562 +0000 UTC Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.476479 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"77096ca9ca40766ea3f6ef7e915698ff33f824ba2bcfc0bb2c5b9b2eb64f21f0"} Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.476517 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"30779fa178428679a956a66ac9e5f5729db30b3aa0f0c5930f224cf140c5703f"} Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.476528 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.476530 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9e6f25469fce6943fbdd305039e3ed4bbf9694eb3e485d9a7556a708a1434f08"} Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.476821 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"181c870b27579fdf828f52cee2ce312219d03f4d4c298f83cf4f74d0e5af0fc1"} Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.477315 4786 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.477363 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.477377 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.478892 4786 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5" exitCode=0 Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.478939 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5"} Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.478984 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.479912 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.479935 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.479946 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.481094 4786 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f" exitCode=0 Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.481137 4786 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f"} Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.481231 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.482031 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.482059 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.482071 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.482708 4786 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="279be35df94d0bffd8ba166fd9774277663989daf60d2bb656fd57fc7e9534ea" exitCode=0 Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.482799 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"279be35df94d0bffd8ba166fd9774277663989daf60d2bb656fd57fc7e9534ea"} Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.482887 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.483673 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.483701 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.483709 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.483748 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.484440 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.484474 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.484488 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.485084 4786 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="7122d0fcd9cf517570a47584791413235924c7c7192a487896c93eccfd5d8d4f" exitCode=0 Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.485132 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"7122d0fcd9cf517570a47584791413235924c7c7192a487896c93eccfd5d8d4f"} Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.485139 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.485860 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.485885 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:06:59 crc 
kubenswrapper[4786]: I0127 13:06:59.485893 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.532415 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.533252 4786 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": dial tcp 192.168.126.11:10357: connect: connection refused" start-of-body= Jan 27 13:06:59 crc kubenswrapper[4786]: I0127 13:06:59.533321 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": dial tcp 192.168.126.11:10357: connect: connection refused" Jan 27 13:07:00 crc kubenswrapper[4786]: I0127 13:07:00.063245 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 13:07:00 crc kubenswrapper[4786]: I0127 13:07:00.410762 4786 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused Jan 27 13:07:00 crc kubenswrapper[4786]: I0127 13:07:00.413988 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 06:56:51.198289168 +0000 UTC Jan 27 13:07:00 crc kubenswrapper[4786]: E0127 13:07:00.415592 4786 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" interval="3.2s" Jan 27 13:07:00 crc kubenswrapper[4786]: I0127 13:07:00.492654 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"823e47a91a33d234f34bb3e1529d300d469cab2bad4630ec73c30b9d669abe06"} Jan 27 13:07:00 crc kubenswrapper[4786]: I0127 13:07:00.492859 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"19f3b3f74451aceef575e0385356b071170bdde56171fce36e1bdd5debdac37c"} Jan 27 13:07:00 crc kubenswrapper[4786]: I0127 13:07:00.492883 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"393d243fd50fbbc25df6261d62f7e4ddd7aee425cef5c4ccc0d8ff028cd34df5"} Jan 27 13:07:00 crc kubenswrapper[4786]: I0127 13:07:00.493108 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:07:00 crc kubenswrapper[4786]: I0127 13:07:00.494305 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:00 crc kubenswrapper[4786]: I0127 13:07:00.494351 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:00 crc kubenswrapper[4786]: I0127 13:07:00.494387 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:00 crc kubenswrapper[4786]: I0127 13:07:00.496365 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"54c929c23ed1b0a4edb128dcd3e034412d0ebc29434b3da4ad191337dfb776e3"} Jan 27 13:07:00 crc kubenswrapper[4786]: I0127 13:07:00.496569 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:07:00 crc kubenswrapper[4786]: I0127 13:07:00.497638 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:00 crc kubenswrapper[4786]: I0127 13:07:00.497677 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:00 crc kubenswrapper[4786]: I0127 13:07:00.497697 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:00 crc kubenswrapper[4786]: I0127 13:07:00.498595 4786 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117" exitCode=0 Jan 27 13:07:00 crc kubenswrapper[4786]: I0127 13:07:00.498708 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117"} Jan 27 13:07:00 crc kubenswrapper[4786]: I0127 13:07:00.498808 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:07:00 crc kubenswrapper[4786]: I0127 13:07:00.500068 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:00 crc kubenswrapper[4786]: I0127 13:07:00.500204 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:00 crc kubenswrapper[4786]: I0127 
13:07:00.500301 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:00 crc kubenswrapper[4786]: I0127 13:07:00.503949 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:07:00 crc kubenswrapper[4786]: I0127 13:07:00.504572 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6"} Jan 27 13:07:00 crc kubenswrapper[4786]: I0127 13:07:00.504728 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e00de8064231a695efe43b465ac740c45561bd6e1998201059115249a9bd8118"} Jan 27 13:07:00 crc kubenswrapper[4786]: I0127 13:07:00.504817 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead"} Jan 27 13:07:00 crc kubenswrapper[4786]: I0127 13:07:00.504899 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262"} Jan 27 13:07:00 crc kubenswrapper[4786]: I0127 13:07:00.505252 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:00 crc kubenswrapper[4786]: I0127 13:07:00.505304 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:00 crc kubenswrapper[4786]: I0127 13:07:00.505323 4786 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:00 crc kubenswrapper[4786]: W0127 13:07:00.553560 4786 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused Jan 27 13:07:00 crc kubenswrapper[4786]: E0127 13:07:00.553661 4786 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Jan 27 13:07:00 crc kubenswrapper[4786]: I0127 13:07:00.650465 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:07:00 crc kubenswrapper[4786]: I0127 13:07:00.653020 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:00 crc kubenswrapper[4786]: I0127 13:07:00.653073 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:00 crc kubenswrapper[4786]: I0127 13:07:00.653089 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:00 crc kubenswrapper[4786]: I0127 13:07:00.653118 4786 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 13:07:00 crc kubenswrapper[4786]: E0127 13:07:00.653635 4786 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.5:6443: connect: connection refused" node="crc" Jan 27 13:07:00 crc kubenswrapper[4786]: W0127 13:07:00.708271 4786 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused Jan 27 13:07:00 crc kubenswrapper[4786]: E0127 13:07:00.708540 4786 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Jan 27 13:07:01 crc kubenswrapper[4786]: W0127 13:07:01.228436 4786 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused Jan 27 13:07:01 crc kubenswrapper[4786]: E0127 13:07:01.228524 4786 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Jan 27 13:07:01 crc kubenswrapper[4786]: I0127 13:07:01.414140 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 03:41:04.820431003 +0000 UTC Jan 27 13:07:01 crc kubenswrapper[4786]: I0127 13:07:01.508675 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30"} Jan 27 13:07:01 crc kubenswrapper[4786]: I0127 13:07:01.508741 
4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:07:01 crc kubenswrapper[4786]: I0127 13:07:01.509665 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:01 crc kubenswrapper[4786]: I0127 13:07:01.509700 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:01 crc kubenswrapper[4786]: I0127 13:07:01.509708 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:01 crc kubenswrapper[4786]: I0127 13:07:01.510515 4786 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128" exitCode=0 Jan 27 13:07:01 crc kubenswrapper[4786]: I0127 13:07:01.510597 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:07:01 crc kubenswrapper[4786]: I0127 13:07:01.510631 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128"} Jan 27 13:07:01 crc kubenswrapper[4786]: I0127 13:07:01.510676 4786 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 13:07:01 crc kubenswrapper[4786]: I0127 13:07:01.510694 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:07:01 crc kubenswrapper[4786]: I0127 13:07:01.510732 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:07:01 crc kubenswrapper[4786]: I0127 13:07:01.510751 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:07:01 crc 
kubenswrapper[4786]: I0127 13:07:01.511887 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:01 crc kubenswrapper[4786]: I0127 13:07:01.511897 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:01 crc kubenswrapper[4786]: I0127 13:07:01.511913 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:01 crc kubenswrapper[4786]: I0127 13:07:01.511923 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:01 crc kubenswrapper[4786]: I0127 13:07:01.511922 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:01 crc kubenswrapper[4786]: I0127 13:07:01.512027 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:01 crc kubenswrapper[4786]: I0127 13:07:01.511888 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:01 crc kubenswrapper[4786]: I0127 13:07:01.511897 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:01 crc kubenswrapper[4786]: I0127 13:07:01.512125 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:01 crc kubenswrapper[4786]: I0127 13:07:01.512138 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:01 crc kubenswrapper[4786]: I0127 13:07:01.512141 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:01 crc kubenswrapper[4786]: I0127 13:07:01.512152 4786 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:01 crc kubenswrapper[4786]: I0127 13:07:01.722259 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:07:02 crc kubenswrapper[4786]: I0127 13:07:02.165994 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 13:07:02 crc kubenswrapper[4786]: I0127 13:07:02.174243 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 13:07:02 crc kubenswrapper[4786]: I0127 13:07:02.414597 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 06:10:57.522184266 +0000 UTC Jan 27 13:07:02 crc kubenswrapper[4786]: I0127 13:07:02.521709 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b8aec363d5b95eaca7107351edfd1ae381e714391a4db22a72a30d0250534254"} Jan 27 13:07:02 crc kubenswrapper[4786]: I0127 13:07:02.521984 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:07:02 crc kubenswrapper[4786]: I0127 13:07:02.521986 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9284ead95a7669e31809ac214e2297884715a760a2eb4984e404c059ff2b6b50"} Jan 27 13:07:02 crc kubenswrapper[4786]: I0127 13:07:02.522126 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5b35278bed2eea01653cf49906c6a1f3447c33d1b80963a3dd10e07a854ec57f"} Jan 27 13:07:02 crc kubenswrapper[4786]: I0127 
13:07:02.522160 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fd5d21270850bfee1e52c235f542505dc4b8129379b8ef932ed1d59c599abee6"} Jan 27 13:07:02 crc kubenswrapper[4786]: I0127 13:07:02.521823 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:07:02 crc kubenswrapper[4786]: I0127 13:07:02.522006 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:07:02 crc kubenswrapper[4786]: I0127 13:07:02.523166 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0b9a9fd35c4ee52eeffce0566113373c6d70b3ccadc124e7d6a4b8c7ba1d21a6"} Jan 27 13:07:02 crc kubenswrapper[4786]: I0127 13:07:02.523498 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:02 crc kubenswrapper[4786]: I0127 13:07:02.523533 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:02 crc kubenswrapper[4786]: I0127 13:07:02.523547 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:02 crc kubenswrapper[4786]: I0127 13:07:02.523890 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:02 crc kubenswrapper[4786]: I0127 13:07:02.523936 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:02 crc kubenswrapper[4786]: I0127 13:07:02.523956 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:02 crc kubenswrapper[4786]: I0127 13:07:02.527705 4786 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:02 crc kubenswrapper[4786]: I0127 13:07:02.527735 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:02 crc kubenswrapper[4786]: I0127 13:07:02.527754 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:03 crc kubenswrapper[4786]: I0127 13:07:03.308768 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 13:07:03 crc kubenswrapper[4786]: I0127 13:07:03.409704 4786 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 13:07:03 crc kubenswrapper[4786]: I0127 13:07:03.414995 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 16:32:06.47265486 +0000 UTC Jan 27 13:07:03 crc kubenswrapper[4786]: I0127 13:07:03.524384 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:07:03 crc kubenswrapper[4786]: I0127 13:07:03.524442 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:07:03 crc kubenswrapper[4786]: I0127 13:07:03.524398 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:07:03 crc kubenswrapper[4786]: I0127 13:07:03.525738 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:03 crc kubenswrapper[4786]: I0127 13:07:03.525782 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:03 crc kubenswrapper[4786]: I0127 13:07:03.525797 4786 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:03 crc kubenswrapper[4786]: I0127 13:07:03.525844 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:03 crc kubenswrapper[4786]: I0127 13:07:03.525867 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:03 crc kubenswrapper[4786]: I0127 13:07:03.525871 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:03 crc kubenswrapper[4786]: I0127 13:07:03.525901 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:03 crc kubenswrapper[4786]: I0127 13:07:03.525876 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:03 crc kubenswrapper[4786]: I0127 13:07:03.525919 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:03 crc kubenswrapper[4786]: I0127 13:07:03.853804 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:07:03 crc kubenswrapper[4786]: I0127 13:07:03.855584 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:03 crc kubenswrapper[4786]: I0127 13:07:03.855774 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:03 crc kubenswrapper[4786]: I0127 13:07:03.855802 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:03 crc kubenswrapper[4786]: I0127 13:07:03.855861 4786 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 13:07:04 crc kubenswrapper[4786]: I0127 13:07:04.416415 4786 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 16:59:48.097034325 +0000 UTC Jan 27 13:07:04 crc kubenswrapper[4786]: I0127 13:07:04.527498 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:07:04 crc kubenswrapper[4786]: I0127 13:07:04.528922 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:04 crc kubenswrapper[4786]: I0127 13:07:04.528961 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:04 crc kubenswrapper[4786]: I0127 13:07:04.528971 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:04 crc kubenswrapper[4786]: I0127 13:07:04.855314 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 13:07:04 crc kubenswrapper[4786]: I0127 13:07:04.855551 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:07:04 crc kubenswrapper[4786]: I0127 13:07:04.857063 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:04 crc kubenswrapper[4786]: I0127 13:07:04.857103 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:04 crc kubenswrapper[4786]: I0127 13:07:04.857115 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:05 crc kubenswrapper[4786]: I0127 13:07:05.317484 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 27 13:07:05 crc kubenswrapper[4786]: I0127 13:07:05.317756 4786 kubelet_node_status.go:401] "Setting 
node annotation to enable volume controller attach/detach" Jan 27 13:07:05 crc kubenswrapper[4786]: I0127 13:07:05.319463 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:05 crc kubenswrapper[4786]: I0127 13:07:05.319507 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:05 crc kubenswrapper[4786]: I0127 13:07:05.319520 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:05 crc kubenswrapper[4786]: I0127 13:07:05.364255 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:07:05 crc kubenswrapper[4786]: I0127 13:07:05.364519 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:07:05 crc kubenswrapper[4786]: I0127 13:07:05.366201 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:05 crc kubenswrapper[4786]: I0127 13:07:05.366252 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:05 crc kubenswrapper[4786]: I0127 13:07:05.366272 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:05 crc kubenswrapper[4786]: I0127 13:07:05.417504 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 23:14:39.947395463 +0000 UTC Jan 27 13:07:06 crc kubenswrapper[4786]: I0127 13:07:06.090385 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:07:06 crc kubenswrapper[4786]: I0127 13:07:06.090556 4786 kubelet_node_status.go:401] "Setting node annotation to 
enable volume controller attach/detach" Jan 27 13:07:06 crc kubenswrapper[4786]: I0127 13:07:06.091917 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:06 crc kubenswrapper[4786]: I0127 13:07:06.092015 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:06 crc kubenswrapper[4786]: I0127 13:07:06.092038 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:06 crc kubenswrapper[4786]: I0127 13:07:06.418461 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 22:39:04.74451233 +0000 UTC Jan 27 13:07:07 crc kubenswrapper[4786]: I0127 13:07:07.419437 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 03:40:57.659545788 +0000 UTC Jan 27 13:07:07 crc kubenswrapper[4786]: E0127 13:07:07.546418 4786 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 13:07:08 crc kubenswrapper[4786]: I0127 13:07:08.419847 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 19:12:49.836294661 +0000 UTC Jan 27 13:07:09 crc kubenswrapper[4786]: I0127 13:07:09.420795 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 08:40:14.938626383 +0000 UTC Jan 27 13:07:10 crc kubenswrapper[4786]: I0127 13:07:10.069816 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 13:07:10 crc kubenswrapper[4786]: I0127 13:07:10.070035 
4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:07:10 crc kubenswrapper[4786]: I0127 13:07:10.071487 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:10 crc kubenswrapper[4786]: I0127 13:07:10.071547 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:10 crc kubenswrapper[4786]: I0127 13:07:10.071568 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:10 crc kubenswrapper[4786]: I0127 13:07:10.104513 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 27 13:07:10 crc kubenswrapper[4786]: I0127 13:07:10.104938 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:07:10 crc kubenswrapper[4786]: I0127 13:07:10.105992 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:10 crc kubenswrapper[4786]: I0127 13:07:10.106099 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:10 crc kubenswrapper[4786]: I0127 13:07:10.106175 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:10 crc kubenswrapper[4786]: I0127 13:07:10.421347 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 22:29:27.558188132 +0000 UTC Jan 27 13:07:11 crc kubenswrapper[4786]: I0127 13:07:11.061104 4786 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" 
start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 27 13:07:11 crc kubenswrapper[4786]: I0127 13:07:11.061171 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 27 13:07:11 crc kubenswrapper[4786]: I0127 13:07:11.097354 4786 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Jan 27 13:07:11 crc kubenswrapper[4786]: I0127 13:07:11.097722 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 27 13:07:11 crc kubenswrapper[4786]: I0127 13:07:11.422192 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 17:54:18.631074899 +0000 UTC Jan 27 13:07:12 crc kubenswrapper[4786]: I0127 13:07:12.423496 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 10:57:16.88797209 +0000 UTC Jan 27 13:07:12 crc kubenswrapper[4786]: 
I0127 13:07:12.533440 4786 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 13:07:12 crc kubenswrapper[4786]: I0127 13:07:12.533515 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 13:07:13 crc kubenswrapper[4786]: I0127 13:07:13.424577 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 07:25:05.532821367 +0000 UTC Jan 27 13:07:14 crc kubenswrapper[4786]: I0127 13:07:14.424926 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 06:30:50.972993666 +0000 UTC Jan 27 13:07:15 crc kubenswrapper[4786]: I0127 13:07:15.425543 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 03:43:39.589140475 +0000 UTC Jan 27 13:07:16 crc kubenswrapper[4786]: E0127 13:07:16.061733 4786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.063225 4786 reflector.go:368] Caches populated for *v1.CSIDriver from 
k8s.io/client-go/informers/factory.go:160 Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.063252 4786 trace.go:236] Trace[520727254]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 13:07:05.689) (total time: 10373ms): Jan 27 13:07:16 crc kubenswrapper[4786]: Trace[520727254]: ---"Objects listed" error: 10373ms (13:07:16.063) Jan 27 13:07:16 crc kubenswrapper[4786]: Trace[520727254]: [10.373954768s] [10.373954768s] END Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.063277 4786 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.067000 4786 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.067237 4786 trace.go:236] Trace[222273062]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 13:07:01.918) (total time: 14148ms): Jan 27 13:07:16 crc kubenswrapper[4786]: Trace[222273062]: ---"Objects listed" error: 14148ms (13:07:16.067) Jan 27 13:07:16 crc kubenswrapper[4786]: Trace[222273062]: [14.148493662s] [14.148493662s] END Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.067276 4786 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 27 13:07:16 crc kubenswrapper[4786]: E0127 13:07:16.067660 4786 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.071734 4786 trace.go:236] Trace[261890706]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 13:07:05.183) (total time: 10888ms): Jan 27 13:07:16 crc kubenswrapper[4786]: Trace[261890706]: ---"Objects listed" error: 10888ms (13:07:16.071) Jan 27 13:07:16 crc kubenswrapper[4786]: 
Trace[261890706]: [10.888295603s] [10.888295603s] END Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.071761 4786 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.079829 4786 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.094985 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.098482 4786 csr.go:261] certificate signing request csr-pqr9h is approved, waiting to be issued Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.099091 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.099322 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.106232 4786 csr.go:257] certificate signing request csr-pqr9h is issued Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.108978 4786 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:59136->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.109035 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 
192.168.126.11:59136->192.168.126.11:17697: read: connection reset by peer" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.401447 4786 apiserver.go:52] "Watching apiserver" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.404521 4786 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.404870 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.405223 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.405324 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:07:16 crc kubenswrapper[4786]: E0127 13:07:16.405379 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.405589 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.405591 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.405657 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 13:07:16 crc kubenswrapper[4786]: E0127 13:07:16.405628 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:07:16 crc kubenswrapper[4786]: E0127 13:07:16.405756 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.405808 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.407279 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.407651 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.407726 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.408073 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.408203 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.408595 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.409279 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.409406 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.413763 4786 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.415940 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.426501 4786 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 05:03:41.889710348 +0000 UTC Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.435893 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.445974 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.455910 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.467033 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.469814 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.469847 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.469864 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.469881 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.469901 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.469917 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.469934 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: 
\"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.469957 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.469981 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470002 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470018 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470066 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470082 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470097 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470140 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470156 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470173 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470190 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470210 4786 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470230 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470273 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470312 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470334 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470357 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: 
\"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470375 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470390 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470404 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470421 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470452 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470466 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: 
\"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470482 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470490 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470496 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470549 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470575 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470596 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470646 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470668 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470705 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470725 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470745 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470773 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470793 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470811 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470828 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470846 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470870 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470888 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470908 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470929 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: 
\"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470952 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470971 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470989 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471006 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471025 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471044 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: 
\"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471062 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471080 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471098 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471118 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471138 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 
13:07:16.471158 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471186 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471206 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471229 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471254 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471274 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471294 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471316 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471337 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471359 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471378 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: 
\"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471435 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471456 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471476 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471496 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471517 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471539 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471566 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471589 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471630 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471655 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471677 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471702 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471722 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471743 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471768 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471789 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471809 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471830 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471850 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471870 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471891 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471913 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471935 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471956 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471978 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.472000 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.472023 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.472046 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" 
(UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.472069 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.472091 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.472114 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.472139 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.472162 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.472186 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.472209 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.472232 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.472254 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.472276 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.472302 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: 
\"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.472326 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.472352 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.472377 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.472403 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.472430 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.472454 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.472480 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.472508 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.472533 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.472557 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.472582 4786 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.472623 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.472648 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.472671 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.472694 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.472715 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.472737 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.472759 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.472783 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.472810 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.472835 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.472860 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.472882 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.472907 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.472965 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.472988 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.473031 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 13:07:16 crc 
kubenswrapper[4786]: I0127 13:07:16.473058 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.473083 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.473141 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.473166 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.473188 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.473209 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.473230 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.473251 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.473273 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.473296 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.473318 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.473342 4786 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.473425 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.473451 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.473476 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.473501 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.473524 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.473547 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.473571 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.473596 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.473639 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.473668 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 
13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.473695 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.473717 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.473741 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.473765 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.473790 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.473822 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.473847 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.473871 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.473898 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.473922 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.473954 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 13:07:16 crc kubenswrapper[4786]: 
I0127 13:07:16.473984 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.474008 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.474030 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.474054 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.474078 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.474106 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.474130 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.474155 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.474180 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.474204 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.474227 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 
13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.474254 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.474279 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.474304 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.474330 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.474356 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.474379 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.474402 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.474426 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.474451 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.474475 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.474503 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 13:07:16 crc 
kubenswrapper[4786]: I0127 13:07:16.474526 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.474551 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.474577 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.474620 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.474675 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.474708 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.474740 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.474768 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.474797 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.474826 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.474855 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.474880 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.474907 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.474932 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.474957 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod 
\"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.474982 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.475008 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.475036 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.475096 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.475112 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.476380 4786 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.478298 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.480712 4786 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.481067 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470852 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.470839 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: E0127 13:07:16.488557 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:07:16.984290593 +0000 UTC m=+20.194904722 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471044 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471201 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471533 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.471670 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.472144 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.472319 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.472495 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.473043 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.473257 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.473419 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.473903 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.474751 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.474804 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.474816 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.475015 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.475194 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.475298 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.475347 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.476009 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.476178 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.476231 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.476540 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.476828 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: E0127 13:07:16.476979 4786 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.477034 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.477156 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.477172 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.477225 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.477255 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.477404 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.477421 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.477725 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.477734 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.477775 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.477873 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.478057 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.478423 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.478468 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.478672 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.479005 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: E0127 13:07:16.479088 4786 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.479137 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.479361 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.479260 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.479561 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). 
InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.480261 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.482678 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.482724 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.476657 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.483181 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.483397 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.483763 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.483805 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.484288 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.484378 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.490735 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.491113 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.491482 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.491645 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.492351 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.492691 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.492809 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.492823 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.493619 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.493682 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.496421 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.496557 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.496773 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.497904 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.498292 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.498337 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.498456 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.498756 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.498766 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.498971 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.499082 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.500133 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.500277 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.500579 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.500711 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.500789 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.501003 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.501012 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.501237 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.501342 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.501472 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.501545 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.501944 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.502525 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.502880 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.503061 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.503129 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.503244 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.503346 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.503583 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.503714 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.503863 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.503858 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.504261 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.504267 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.504692 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.504691 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.504913 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.505228 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.505361 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.505682 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.505790 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.505892 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.505992 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.506164 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.506447 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.507085 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.507104 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.507411 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.507641 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.507843 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.507889 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.507934 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.508187 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.508317 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.508417 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.508498 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.508505 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.508851 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.509323 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.509349 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.509411 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.509326 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.509494 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.509553 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.509835 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.509895 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.510089 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.510127 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.510422 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.510705 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.510712 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.510780 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.510812 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.511049 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.511145 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.511231 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.511316 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.511424 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.511632 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.511963 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.512115 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.512176 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.512260 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.512336 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.512403 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.512429 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.513794 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 27 13:07:16 crc kubenswrapper[4786]: E0127 13:07:16.513905 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.515123 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.514203 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.515965 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.514327 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.514400 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.514490 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.514742 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: E0127 13:07:16.514804 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 13:07:17.014739484 +0000 UTC m=+20.225353603 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 13:07:16 crc kubenswrapper[4786]: E0127 13:07:16.516243 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 13:07:17.016221844 +0000 UTC m=+20.226835963 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.515110 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.515158 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.515217 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.515404 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.515495 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.515495 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.515511 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.515542 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.515732 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.515840 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: E0127 13:07:16.515915 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 13:07:16 crc kubenswrapper[4786]: E0127 13:07:16.516397 4786 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:07:16 crc kubenswrapper[4786]: E0127 13:07:16.516457 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 13:07:17.01644966 +0000 UTC m=+20.227063779 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.519048 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.519235 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.519406 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.519671 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.519741 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.521098 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.521228 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.521405 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.521742 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.522134 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.523944 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.524285 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.524348 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.524638 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.524731 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.527526 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.527825 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.528175 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.528219 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.529158 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.529836 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.530204 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:07:16 crc kubenswrapper[4786]: E0127 13:07:16.531752 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 13:07:16 crc kubenswrapper[4786]: E0127 13:07:16.531785 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 13:07:16 crc kubenswrapper[4786]: E0127 13:07:16.531804 4786 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:07:16 crc kubenswrapper[4786]: E0127 13:07:16.531872 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-27 13:07:17.031848036 +0000 UTC m=+20.242462355 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.532996 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.533358 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.534494 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.539398 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.543834 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.552890 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8573a07-5c6b-490a-abd2-e38fe66ef4f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00de8064231a695efe43b465ac740c45561bd6e1998201059115249a9bd8118\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:0
0Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.553938 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.560573 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.563804 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.565646 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.566700 4786 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30" exitCode=255 Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.566759 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30"} Jan 27 13:07:16 crc kubenswrapper[4786]: E0127 13:07:16.574352 4786 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.574678 4786 scope.go:117] "RemoveContainer" containerID="144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.575643 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.575717 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.575791 4786 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.575804 4786 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.575813 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.575822 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.575832 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node 
\"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.575840 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.575849 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.575858 4786 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.575866 4786 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.575874 4786 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.575883 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.575891 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc 
kubenswrapper[4786]: I0127 13:07:16.575900 4786 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.575909 4786 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.575919 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.575931 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.575940 4786 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.575950 4786 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.575958 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.575966 4786 reconciler_common.go:293] "Volume detached 
for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.575976 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.575984 4786 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.575992 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576000 4786 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576008 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576016 4786 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576025 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576033 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576041 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576049 4786 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576058 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576066 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576074 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576082 4786 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node 
\"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576089 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576097 4786 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576106 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576116 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576146 4786 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576170 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576181 4786 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 
13:07:16.576190 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576201 4786 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576210 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576221 4786 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576234 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576241 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576251 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: 
\"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576265 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576277 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576288 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576298 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576314 4786 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576325 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576335 4786 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576346 4786 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576355 4786 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576367 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576377 4786 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576389 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576399 4786 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576409 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: 
I0127 13:07:16.576420 4786 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576429 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576439 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576450 4786 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576460 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576471 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576482 4786 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576494 4786 reconciler_common.go:293] "Volume detached for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576507 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576524 4786 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576546 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576559 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576568 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576577 4786 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576585 4786 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576592 4786 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576621 4786 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576630 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576637 4786 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576648 4786 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576657 4786 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576665 4786 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc 
kubenswrapper[4786]: I0127 13:07:16.576673 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576681 4786 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576690 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576699 4786 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576222 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576707 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576796 4786 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.576864 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577002 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577016 4786 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577026 4786 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577034 4786 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577042 4786 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577050 4786 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on 
node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577059 4786 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577067 4786 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577075 4786 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577083 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577092 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577099 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577109 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577117 4786 
reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577125 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577134 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577142 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577150 4786 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577161 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577172 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 
13:07:16.577184 4786 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577194 4786 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577204 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577216 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577228 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577238 4786 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577249 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577260 4786 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" 
(UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577271 4786 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577282 4786 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577294 4786 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577306 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577317 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577332 4786 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577340 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on 
node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577349 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577358 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577366 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577373 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577384 4786 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577395 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577406 4786 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577416 
4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577428 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577438 4786 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577448 4786 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577459 4786 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577470 4786 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577481 4786 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577492 4786 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577503 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577525 4786 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577536 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577545 4786 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577553 4786 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577563 4786 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577573 4786 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577583 4786 
reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577593 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577624 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577636 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577647 4786 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577659 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577671 4786 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577684 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577697 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577708 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577721 4786 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577733 4786 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577745 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577755 4786 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577767 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577778 4786 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577789 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577800 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577811 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577822 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577834 4786 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577844 4786 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" 
DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577855 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577866 4786 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577878 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577889 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577900 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577908 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577916 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577934 4786 
reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577944 4786 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577954 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577964 4786 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577974 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577984 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.577994 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.578005 4786 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.578019 4786 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.578030 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.578041 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.578052 4786 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.578064 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.578074 4786 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.578085 4786 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.578096 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.578379 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.594648 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.605119 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.619751 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8573a07-5c6b-490a-abd2-e38fe66ef4f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00de8064231a695efe43b465ac740c45561bd6e1998201059115249a9bd8118\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"light.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:07:16.073073 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:07:16.083509 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:07:16.083537 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 
13:07:16.083542 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083546 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:07:16.083549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:07:16.083552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:07:16.083555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:07:16.083684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 13:07:16.094546 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-572751212/tls.crt::/tmp/serving-cert-572751212/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769519220\\\\\\\\\\\\\\\" (2026-01-27 13:07:00 +0000 UTC to 2026-02-26 13:07:01 +0000 UTC (now=2026-01-27 13:07:16.094502626 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094907 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769519221\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769519221\\\\\\\\\\\\\\\" (2026-01-27 12:07:01 +0000 UTC to 2027-01-27 12:07:01 +0000 UTC (now=2026-01-27 13:07:16.094841616 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094999 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nF0127 13:07:16.095038 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.633323 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.650732 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.674089 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.701877 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.718091 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.725197 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 13:07:16 crc kubenswrapper[4786]: I0127 13:07:16.731492 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 13:07:16 crc kubenswrapper[4786]: W0127 13:07:16.741925 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-55502371e7ad8001c1bbb6a986d6722f755eb6f8aed958892b3d52ac5d97f103 WatchSource:0}: Error finding container 55502371e7ad8001c1bbb6a986d6722f755eb6f8aed958892b3d52ac5d97f103: Status 404 returned error can't find the container with id 55502371e7ad8001c1bbb6a986d6722f755eb6f8aed958892b3d52ac5d97f103 Jan 27 13:07:16 crc kubenswrapper[4786]: W0127 13:07:16.759833 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-2c8ac15e343c6036bb7b51353d545c917dee47701b2afd2c740ad0a7cc36f99c WatchSource:0}: Error finding container 2c8ac15e343c6036bb7b51353d545c917dee47701b2afd2c740ad0a7cc36f99c: Status 404 returned error can't find the container with id 2c8ac15e343c6036bb7b51353d545c917dee47701b2afd2c740ad0a7cc36f99c Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.082111 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.082180 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.082206 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:07:17 crc kubenswrapper[4786]: E0127 13:07:17.082271 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:07:18.082248515 +0000 UTC m=+21.292862634 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:07:17 crc kubenswrapper[4786]: E0127 13:07:17.082306 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 13:07:17 crc kubenswrapper[4786]: E0127 13:07:17.082328 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 13:07:17 crc kubenswrapper[4786]: E0127 13:07:17.082338 4786 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:07:17 crc kubenswrapper[4786]: E0127 13:07:17.082361 4786 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 13:07:17 crc kubenswrapper[4786]: E0127 13:07:17.082383 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 13:07:18.082372579 +0000 UTC m=+21.292986698 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.082305 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:07:17 crc kubenswrapper[4786]: E0127 13:07:17.082405 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-27 13:07:18.082395799 +0000 UTC m=+21.293009918 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.082427 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:07:17 crc kubenswrapper[4786]: E0127 13:07:17.082431 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 13:07:17 crc kubenswrapper[4786]: E0127 13:07:17.082479 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 13:07:17 crc kubenswrapper[4786]: E0127 13:07:17.082492 4786 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:07:17 crc kubenswrapper[4786]: E0127 13:07:17.082502 4786 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 
27 13:07:17 crc kubenswrapper[4786]: E0127 13:07:17.082524 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 13:07:18.082514462 +0000 UTC m=+21.293128581 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:07:17 crc kubenswrapper[4786]: E0127 13:07:17.082539 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 13:07:18.082532623 +0000 UTC m=+21.293146742 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.107115 4786 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-27 13:02:16 +0000 UTC, rotation deadline is 2026-11-30 21:37:12.422838412 +0000 UTC Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.107193 4786 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7376h29m55.315649844s for next certificate rotation Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.245661 4786 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 27 13:07:17 crc kubenswrapper[4786]: W0127 13:07:17.246183 4786 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 27 13:07:17 crc kubenswrapper[4786]: W0127 13:07:17.246236 4786 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 27 13:07:17 crc kubenswrapper[4786]: W0127 13:07:17.246253 4786 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"iptables-alerter-script": Unexpected watch close - watch lasted less 
than a second and no items received Jan 27 13:07:17 crc kubenswrapper[4786]: W0127 13:07:17.246251 4786 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"ovnkube-identity-cm": Unexpected watch close - watch lasted less than a second and no items received Jan 27 13:07:17 crc kubenswrapper[4786]: W0127 13:07:17.246291 4786 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Jan 27 13:07:17 crc kubenswrapper[4786]: W0127 13:07:17.246299 4786 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 27 13:07:17 crc kubenswrapper[4786]: W0127 13:07:17.246251 4786 reflector.go:484] pkg/kubelet/config/apiserver.go:66: watch of *v1.Pod ended with: very short watch: pkg/kubelet/config/apiserver.go:66: Unexpected watch close - watch lasted less than a second and no items received Jan 27 13:07:17 crc kubenswrapper[4786]: W0127 13:07:17.246276 4786 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-network-operator"/"metrics-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 27 13:07:17 crc kubenswrapper[4786]: W0127 13:07:17.246329 4786 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-network-node-identity"/"network-node-identity-cert": Unexpected watch close - watch lasted less than a second 
and no items received Jan 27 13:07:17 crc kubenswrapper[4786]: W0127 13:07:17.246336 4786 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 27 13:07:17 crc kubenswrapper[4786]: E0127 13:07:17.246181 4786 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": read tcp 38.102.83.5:47018->38.102.83.5:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188e985a7ba471b2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 13:06:57.992036786 +0000 UTC m=+1.202650905,LastTimestamp:2026-01-27 13:06:57.992036786 +0000 UTC m=+1.202650905,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.427540 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 19:34:50.687536231 +0000 UTC Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.469136 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.469700 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.470468 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.471088 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.471651 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.472165 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.472800 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.473340 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.475361 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.475987 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.476877 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.477678 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.478594 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.479109 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.480068 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.480570 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.481265 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" 
path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.485073 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.485932 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.486675 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.487236 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.487869 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.487959 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" 
path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.488463 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.489167 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.489676 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.490394 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.491391 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.491925 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.492540 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.493211 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" 
path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.494918 4786 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.495015 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.496555 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.497421 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.497821 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.499207 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.499801 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.500592 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.502425 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.503440 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.504180 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.504765 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.505780 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.506680 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.507132 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.508035 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.508559 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.510337 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.510861 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.511503 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.511997 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.512519 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.512569 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.513124 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.513622 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.528518 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8573a07-5c6b-490a-abd2-e38fe66ef4f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00de8064231a695efe43b465ac740c45561bd6e1998201059115249a9bd8118\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"light.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:07:16.073073 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:07:16.083509 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:07:16.083537 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083542 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083546 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:07:16.083549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:07:16.083552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:07:16.083555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:07:16.083684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 13:07:16.094546 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-572751212/tls.crt::/tmp/serving-cert-572751212/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769519220\\\\\\\\\\\\\\\" (2026-01-27 13:07:00 +0000 UTC to 2026-02-26 13:07:01 +0000 UTC (now=2026-01-27 
13:07:16.094502626 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094907 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769519221\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769519221\\\\\\\\\\\\\\\" (2026-01-27 12:07:01 +0000 UTC to 2027-01-27 12:07:01 +0000 UTC (now=2026-01-27 13:07:16.094841616 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094999 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nF0127 13:07:16.095038 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.540902 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.564301 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.575103 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"67a9a6f6ed65f430774bf05c88a99889462a10a5cdcc0b70fcbaccd4cfec9b9d"} Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.575260 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"0d92c199181a73aba7f142cf20fc509542ddee48ac0cadebe1ab851affdafda0"} Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.577829 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.582513 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624"} Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.583027 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.584544 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"9d8f966fc9486cb42a28164dd56ec608330a6697d5e02318f78f9dfce3d79aef"} Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.584580 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f225905e87cce3c64d456d8b5e1bb4a58800fc55d59fcb9e98ef7523f1c8228f"} Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.584592 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2c8ac15e343c6036bb7b51353d545c917dee47701b2afd2c740ad0a7cc36f99c"} Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.584977 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.585624 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"55502371e7ad8001c1bbb6a986d6722f755eb6f8aed958892b3d52ac5d97f103"} Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.600519 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.616830 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a9a6f6ed65f430774bf05c88a99889462a10a5cdcc0b70fcbaccd4cfec9b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.630215 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.647323 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8f966fc9486cb42a28164dd56ec608330a6697d5e02318f78f9dfce3d79aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225905e87cce3c64d456d8b5e1bb4a58800fc55d59fcb9e98ef7523f1c8228f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.671448 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8573a07-5c6b-490a-abd2-e38fe66ef4f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00de8064231a695efe43b465ac740c45561bd6e1998201059115249a9bd8118\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"light.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:07:16.073073 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:07:16.083509 1 secure_serving.go:57] Forcing use of http/1.1 
only\\\\nW0127 13:07:16.083537 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083542 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083546 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:07:16.083549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:07:16.083552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:07:16.083555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:07:16.083684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 13:07:16.094546 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-572751212/tls.crt::/tmp/serving-cert-572751212/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769519220\\\\\\\\\\\\\\\" (2026-01-27 13:07:00 +0000 UTC to 2026-02-26 13:07:01 +0000 UTC (now=2026-01-27 13:07:16.094502626 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094907 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769519221\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769519221\\\\\\\\\\\\\\\" (2026-01-27 12:07:01 +0000 UTC to 2027-01-27 12:07:01 +0000 UTC (now=2026-01-27 13:07:16.094841616 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094999 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nF0127 13:07:16.095038 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.686575 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.710146 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:17 crc kubenswrapper[4786]: I0127 13:07:17.731504 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.078727 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.093024 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.093124 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.093164 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.093191 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.093219 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
Jan 27 13:07:18 crc kubenswrapper[4786]: E0127 13:07:18.093258 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:07:20.093221551 +0000 UTC m=+23.303835670 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:07:18 crc kubenswrapper[4786]: E0127 13:07:18.093343 4786 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 13:07:18 crc kubenswrapper[4786]: E0127 13:07:18.093398 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 13:07:18 crc kubenswrapper[4786]: E0127 13:07:18.093428 4786 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 13:07:18 crc kubenswrapper[4786]: E0127 13:07:18.093447 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 13:07:18 crc kubenswrapper[4786]: E0127 13:07:18.093575 4786 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:07:18 crc kubenswrapper[4786]: E0127 13:07:18.093414 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 13:07:20.093397926 +0000 UTC m=+23.304012125 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 13:07:18 crc kubenswrapper[4786]: E0127 13:07:18.093398 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 13:07:18 crc kubenswrapper[4786]: E0127 13:07:18.093685 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 13:07:20.093656743 +0000 UTC m=+23.304270862 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 13:07:18 crc kubenswrapper[4786]: E0127 13:07:18.093720 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 13:07:20.093707444 +0000 UTC m=+23.304321563 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:07:18 crc kubenswrapper[4786]: E0127 13:07:18.093740 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 13:07:18 crc kubenswrapper[4786]: E0127 13:07:18.093772 4786 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:07:18 crc kubenswrapper[4786]: E0127 13:07:18.093868 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 
nodeName:}" failed. No retries permitted until 2026-01-27 13:07:20.093853108 +0000 UTC m=+23.304467427 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.133907 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.164722 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.182640 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.183905 4786 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.184666 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-bsqnf"] Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.185075 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-bsqnf" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.186848 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.186852 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.187225 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.200768 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8f966fc9486cb42a28164dd56ec608330a6697d5e02318f78f9dfce3d79aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225905e87cce3c64d456d8b5e1bb4a58800fc55d59fcb9e98ef7523f1c8228f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:18Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.214897 4786 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a9a6f6ed65f430774bf05c88a99889462a10a5cdcc0b70fcbaccd4cfec9b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:18Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.233459 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:18Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.241664 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.249476 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:18Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.264717 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:18Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.276389 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bsqnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd62799-0a3c-4682-acb6-a2bc9121b391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bsqnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:18Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.290305 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8573a07-5c6b-490a-abd2-e38fe66ef4f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00de8064231a695efe43b465ac740c45561bd6e1998201059115249a9bd8118\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"light.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:07:16.073073 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:07:16.083509 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:07:16.083537 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083542 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083546 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:07:16.083549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:07:16.083552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:07:16.083555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:07:16.083684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 13:07:16.094546 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-572751212/tls.crt::/tmp/serving-cert-572751212/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769519220\\\\\\\\\\\\\\\" (2026-01-27 13:07:00 +0000 UTC to 2026-02-26 13:07:01 +0000 UTC (now=2026-01-27 13:07:16.094502626 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094907 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 
certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769519221\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769519221\\\\\\\\\\\\\\\" (2026-01-27 12:07:01 +0000 UTC to 2027-01-27 12:07:01 +0000 UTC (now=2026-01-27 13:07:16.094841616 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094999 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nF0127 13:07:16.095038 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771ae
e1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:18Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.294915 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfc2t\" (UniqueName: \"kubernetes.io/projected/bcd62799-0a3c-4682-acb6-a2bc9121b391-kube-api-access-lfc2t\") pod \"node-resolver-bsqnf\" (UID: \"bcd62799-0a3c-4682-acb6-a2bc9121b391\") " pod="openshift-dns/node-resolver-bsqnf" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.294962 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/bcd62799-0a3c-4682-acb6-a2bc9121b391-hosts-file\") pod \"node-resolver-bsqnf\" (UID: 
\"bcd62799-0a3c-4682-acb6-a2bc9121b391\") " pod="openshift-dns/node-resolver-bsqnf" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.301588 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:18Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.370500 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.396062 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfc2t\" (UniqueName: \"kubernetes.io/projected/bcd62799-0a3c-4682-acb6-a2bc9121b391-kube-api-access-lfc2t\") pod \"node-resolver-bsqnf\" (UID: \"bcd62799-0a3c-4682-acb6-a2bc9121b391\") " pod="openshift-dns/node-resolver-bsqnf" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.396123 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/bcd62799-0a3c-4682-acb6-a2bc9121b391-hosts-file\") pod \"node-resolver-bsqnf\" (UID: \"bcd62799-0a3c-4682-acb6-a2bc9121b391\") " pod="openshift-dns/node-resolver-bsqnf" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.396287 4786 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/bcd62799-0a3c-4682-acb6-a2bc9121b391-hosts-file\") pod \"node-resolver-bsqnf\" (UID: \"bcd62799-0a3c-4682-acb6-a2bc9121b391\") " pod="openshift-dns/node-resolver-bsqnf" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.414150 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfc2t\" (UniqueName: \"kubernetes.io/projected/bcd62799-0a3c-4682-acb6-a2bc9121b391-kube-api-access-lfc2t\") pod \"node-resolver-bsqnf\" (UID: \"bcd62799-0a3c-4682-acb6-a2bc9121b391\") " pod="openshift-dns/node-resolver-bsqnf" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.428179 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 09:38:35.753711905 +0000 UTC Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.464653 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.464728 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.464783 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:07:18 crc kubenswrapper[4786]: E0127 13:07:18.464847 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:07:18 crc kubenswrapper[4786]: E0127 13:07:18.464965 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:07:18 crc kubenswrapper[4786]: E0127 13:07:18.465087 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.491452 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.496917 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-bsqnf" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.593572 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bsqnf" event={"ID":"bcd62799-0a3c-4682-acb6-a2bc9121b391","Type":"ContainerStarted","Data":"89f5720600317445300e4bf436df5231b22a7151612387d65ecc6acb2bafc96c"} Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.641830 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-prn84"] Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.642989 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-7bxtk"] Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.643185 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-prn84" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.643210 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-9q6dk"] Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.643392 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.643785 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.653762 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.654194 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.654815 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.654861 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.655047 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.655159 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.655200 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.655462 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.655491 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.655498 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.656234 4786 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.687986 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.716954 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.734061 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8573a07-5c6b-490a-abd2-e38fe66ef4f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00de8064231a695efe43b465ac740c45561bd6e1998201059115249a9bd8118\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"light.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:07:16.073073 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:07:16.083509 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:07:16.083537 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083542 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083546 
1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:07:16.083549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:07:16.083552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:07:16.083555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:07:16.083684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 13:07:16.094546 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-572751212/tls.crt::/tmp/serving-cert-572751212/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769519220\\\\\\\\\\\\\\\" (2026-01-27 13:07:00 +0000 UTC to 2026-02-26 13:07:01 +0000 UTC (now=2026-01-27 13:07:16.094502626 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094907 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769519221\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769519221\\\\\\\\\\\\\\\" (2026-01-27 12:07:01 +0000 UTC to 2027-01-27 12:07:01 +0000 UTC (now=2026-01-27 13:07:16.094841616 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094999 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nF0127 13:07:16.095038 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:18Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.759963 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:18Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.799723 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:18Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.800247 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a290f38c-b94c-4233-9d98-9a54a728cedb-multus-cni-dir\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.800301 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phqbk\" (UniqueName: \"kubernetes.io/projected/2c6a2646-52f7-41be-8a81-3fed6eac75cc-kube-api-access-phqbk\") pod \"machine-config-daemon-7bxtk\" (UID: \"2c6a2646-52f7-41be-8a81-3fed6eac75cc\") " pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.800346 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d153375a-777b-4331-992a-81c845c6d6eb-system-cni-dir\") pod \"multus-additional-cni-plugins-prn84\" (UID: \"d153375a-777b-4331-992a-81c845c6d6eb\") " pod="openshift-multus/multus-additional-cni-plugins-prn84" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.800378 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a290f38c-b94c-4233-9d98-9a54a728cedb-cni-binary-copy\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.800417 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a290f38c-b94c-4233-9d98-9a54a728cedb-host-run-netns\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.800437 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d153375a-777b-4331-992a-81c845c6d6eb-os-release\") pod \"multus-additional-cni-plugins-prn84\" (UID: \"d153375a-777b-4331-992a-81c845c6d6eb\") " pod="openshift-multus/multus-additional-cni-plugins-prn84" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.800481 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj2bq\" (UniqueName: \"kubernetes.io/projected/d153375a-777b-4331-992a-81c845c6d6eb-kube-api-access-gj2bq\") pod \"multus-additional-cni-plugins-prn84\" (UID: \"d153375a-777b-4331-992a-81c845c6d6eb\") " pod="openshift-multus/multus-additional-cni-plugins-prn84" Jan 27 13:07:18 crc 
kubenswrapper[4786]: I0127 13:07:18.800505 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d153375a-777b-4331-992a-81c845c6d6eb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-prn84\" (UID: \"d153375a-777b-4331-992a-81c845c6d6eb\") " pod="openshift-multus/multus-additional-cni-plugins-prn84" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.800530 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a290f38c-b94c-4233-9d98-9a54a728cedb-hostroot\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.800620 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2c6a2646-52f7-41be-8a81-3fed6eac75cc-rootfs\") pod \"machine-config-daemon-7bxtk\" (UID: \"2c6a2646-52f7-41be-8a81-3fed6eac75cc\") " pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.800677 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a290f38c-b94c-4233-9d98-9a54a728cedb-host-run-k8s-cni-cncf-io\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.800713 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a290f38c-b94c-4233-9d98-9a54a728cedb-os-release\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 
crc kubenswrapper[4786]: I0127 13:07:18.800731 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2c6a2646-52f7-41be-8a81-3fed6eac75cc-proxy-tls\") pod \"machine-config-daemon-7bxtk\" (UID: \"2c6a2646-52f7-41be-8a81-3fed6eac75cc\") " pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.800754 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a290f38c-b94c-4233-9d98-9a54a728cedb-multus-daemon-config\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.800795 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a290f38c-b94c-4233-9d98-9a54a728cedb-host-var-lib-cni-multus\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.800812 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d153375a-777b-4331-992a-81c845c6d6eb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-prn84\" (UID: \"d153375a-777b-4331-992a-81c845c6d6eb\") " pod="openshift-multus/multus-additional-cni-plugins-prn84" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.800836 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a290f38c-b94c-4233-9d98-9a54a728cedb-cnibin\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 
27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.800852 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a290f38c-b94c-4233-9d98-9a54a728cedb-multus-socket-dir-parent\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.800868 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a290f38c-b94c-4233-9d98-9a54a728cedb-host-var-lib-kubelet\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.800886 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5xht\" (UniqueName: \"kubernetes.io/projected/a290f38c-b94c-4233-9d98-9a54a728cedb-kube-api-access-d5xht\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.800930 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a290f38c-b94c-4233-9d98-9a54a728cedb-host-run-multus-certs\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.800981 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d153375a-777b-4331-992a-81c845c6d6eb-cnibin\") pod \"multus-additional-cni-plugins-prn84\" (UID: \"d153375a-777b-4331-992a-81c845c6d6eb\") " 
pod="openshift-multus/multus-additional-cni-plugins-prn84" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.801033 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a290f38c-b94c-4233-9d98-9a54a728cedb-system-cni-dir\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.801078 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a290f38c-b94c-4233-9d98-9a54a728cedb-host-var-lib-cni-bin\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.801104 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a290f38c-b94c-4233-9d98-9a54a728cedb-etc-kubernetes\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.801156 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2c6a2646-52f7-41be-8a81-3fed6eac75cc-mcd-auth-proxy-config\") pod \"machine-config-daemon-7bxtk\" (UID: \"2c6a2646-52f7-41be-8a81-3fed6eac75cc\") " pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.801230 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d153375a-777b-4331-992a-81c845c6d6eb-cni-binary-copy\") pod \"multus-additional-cni-plugins-prn84\" (UID: 
\"d153375a-777b-4331-992a-81c845c6d6eb\") " pod="openshift-multus/multus-additional-cni-plugins-prn84" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.801264 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a290f38c-b94c-4233-9d98-9a54a728cedb-multus-conf-dir\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.803599 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.858732 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:18Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.875640 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bsqnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd62799-0a3c-4682-acb6-a2bc9121b391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bsqnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:18Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.890146 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a9a6f6ed65f430774bf05c88a99889462a10a5cdcc0b70fcbaccd4cfec9b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:18Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.901042 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:18Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.902502 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a290f38c-b94c-4233-9d98-9a54a728cedb-host-run-k8s-cni-cncf-io\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.902633 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a290f38c-b94c-4233-9d98-9a54a728cedb-host-run-k8s-cni-cncf-io\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.902642 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a290f38c-b94c-4233-9d98-9a54a728cedb-os-release\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.902724 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2c6a2646-52f7-41be-8a81-3fed6eac75cc-proxy-tls\") pod \"machine-config-daemon-7bxtk\" (UID: \"2c6a2646-52f7-41be-8a81-3fed6eac75cc\") " pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.902747 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a290f38c-b94c-4233-9d98-9a54a728cedb-multus-daemon-config\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.902766 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a290f38c-b94c-4233-9d98-9a54a728cedb-host-var-lib-cni-multus\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.902786 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d153375a-777b-4331-992a-81c845c6d6eb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-prn84\" (UID: \"d153375a-777b-4331-992a-81c845c6d6eb\") " pod="openshift-multus/multus-additional-cni-plugins-prn84" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.902824 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5xht\" (UniqueName: 
\"kubernetes.io/projected/a290f38c-b94c-4233-9d98-9a54a728cedb-kube-api-access-d5xht\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.902842 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a290f38c-b94c-4233-9d98-9a54a728cedb-cnibin\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.902896 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a290f38c-b94c-4233-9d98-9a54a728cedb-multus-socket-dir-parent\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.902911 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a290f38c-b94c-4233-9d98-9a54a728cedb-host-var-lib-kubelet\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.902952 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a290f38c-b94c-4233-9d98-9a54a728cedb-host-run-multus-certs\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.902938 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a290f38c-b94c-4233-9d98-9a54a728cedb-host-var-lib-cni-multus\") pod \"multus-9q6dk\" (UID: 
\"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.902995 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d153375a-777b-4331-992a-81c845c6d6eb-cnibin\") pod \"multus-additional-cni-plugins-prn84\" (UID: \"d153375a-777b-4331-992a-81c845c6d6eb\") " pod="openshift-multus/multus-additional-cni-plugins-prn84" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.902969 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d153375a-777b-4331-992a-81c845c6d6eb-cnibin\") pod \"multus-additional-cni-plugins-prn84\" (UID: \"d153375a-777b-4331-992a-81c845c6d6eb\") " pod="openshift-multus/multus-additional-cni-plugins-prn84" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.903042 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2c6a2646-52f7-41be-8a81-3fed6eac75cc-mcd-auth-proxy-config\") pod \"machine-config-daemon-7bxtk\" (UID: \"2c6a2646-52f7-41be-8a81-3fed6eac75cc\") " pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.903079 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a290f38c-b94c-4233-9d98-9a54a728cedb-system-cni-dir\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.903103 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a290f38c-b94c-4233-9d98-9a54a728cedb-host-var-lib-cni-bin\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " 
pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.903123 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a290f38c-b94c-4233-9d98-9a54a728cedb-etc-kubernetes\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.903147 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a290f38c-b94c-4233-9d98-9a54a728cedb-multus-conf-dir\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.903167 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d153375a-777b-4331-992a-81c845c6d6eb-cni-binary-copy\") pod \"multus-additional-cni-plugins-prn84\" (UID: \"d153375a-777b-4331-992a-81c845c6d6eb\") " pod="openshift-multus/multus-additional-cni-plugins-prn84" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.903191 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a290f38c-b94c-4233-9d98-9a54a728cedb-multus-cni-dir\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.903188 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a290f38c-b94c-4233-9d98-9a54a728cedb-host-var-lib-kubelet\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.903208 4786 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phqbk\" (UniqueName: \"kubernetes.io/projected/2c6a2646-52f7-41be-8a81-3fed6eac75cc-kube-api-access-phqbk\") pod \"machine-config-daemon-7bxtk\" (UID: \"2c6a2646-52f7-41be-8a81-3fed6eac75cc\") " pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.903222 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a290f38c-b94c-4233-9d98-9a54a728cedb-host-run-multus-certs\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.903225 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d153375a-777b-4331-992a-81c845c6d6eb-system-cni-dir\") pod \"multus-additional-cni-plugins-prn84\" (UID: \"d153375a-777b-4331-992a-81c845c6d6eb\") " pod="openshift-multus/multus-additional-cni-plugins-prn84" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.903250 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d153375a-777b-4331-992a-81c845c6d6eb-os-release\") pod \"multus-additional-cni-plugins-prn84\" (UID: \"d153375a-777b-4331-992a-81c845c6d6eb\") " pod="openshift-multus/multus-additional-cni-plugins-prn84" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.903269 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj2bq\" (UniqueName: \"kubernetes.io/projected/d153375a-777b-4331-992a-81c845c6d6eb-kube-api-access-gj2bq\") pod \"multus-additional-cni-plugins-prn84\" (UID: \"d153375a-777b-4331-992a-81c845c6d6eb\") " pod="openshift-multus/multus-additional-cni-plugins-prn84" Jan 27 13:07:18 crc 
kubenswrapper[4786]: I0127 13:07:18.903291 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a290f38c-b94c-4233-9d98-9a54a728cedb-cni-binary-copy\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.903312 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a290f38c-b94c-4233-9d98-9a54a728cedb-host-run-netns\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.903329 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a290f38c-b94c-4233-9d98-9a54a728cedb-hostroot\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.903350 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2c6a2646-52f7-41be-8a81-3fed6eac75cc-rootfs\") pod \"machine-config-daemon-7bxtk\" (UID: \"2c6a2646-52f7-41be-8a81-3fed6eac75cc\") " pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.903370 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d153375a-777b-4331-992a-81c845c6d6eb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-prn84\" (UID: \"d153375a-777b-4331-992a-81c845c6d6eb\") " pod="openshift-multus/multus-additional-cni-plugins-prn84" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.903804 4786 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a290f38c-b94c-4233-9d98-9a54a728cedb-os-release\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.903861 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a290f38c-b94c-4233-9d98-9a54a728cedb-multus-conf-dir\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.903046 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a290f38c-b94c-4233-9d98-9a54a728cedb-cnibin\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.903905 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a290f38c-b94c-4233-9d98-9a54a728cedb-system-cni-dir\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.903928 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a290f38c-b94c-4233-9d98-9a54a728cedb-host-var-lib-cni-bin\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.903949 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a290f38c-b94c-4233-9d98-9a54a728cedb-etc-kubernetes\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " 
pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.903169 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a290f38c-b94c-4233-9d98-9a54a728cedb-multus-socket-dir-parent\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.904118 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2c6a2646-52f7-41be-8a81-3fed6eac75cc-mcd-auth-proxy-config\") pod \"machine-config-daemon-7bxtk\" (UID: \"2c6a2646-52f7-41be-8a81-3fed6eac75cc\") " pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.904218 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a290f38c-b94c-4233-9d98-9a54a728cedb-multus-daemon-config\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.904302 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d153375a-777b-4331-992a-81c845c6d6eb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-prn84\" (UID: \"d153375a-777b-4331-992a-81c845c6d6eb\") " pod="openshift-multus/multus-additional-cni-plugins-prn84" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.904349 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a290f38c-b94c-4233-9d98-9a54a728cedb-host-run-netns\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc 
kubenswrapper[4786]: I0127 13:07:18.904375 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a290f38c-b94c-4233-9d98-9a54a728cedb-hostroot\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.904399 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2c6a2646-52f7-41be-8a81-3fed6eac75cc-rootfs\") pod \"machine-config-daemon-7bxtk\" (UID: \"2c6a2646-52f7-41be-8a81-3fed6eac75cc\") " pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.903249 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d153375a-777b-4331-992a-81c845c6d6eb-system-cni-dir\") pod \"multus-additional-cni-plugins-prn84\" (UID: \"d153375a-777b-4331-992a-81c845c6d6eb\") " pod="openshift-multus/multus-additional-cni-plugins-prn84" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.904449 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d153375a-777b-4331-992a-81c845c6d6eb-os-release\") pod \"multus-additional-cni-plugins-prn84\" (UID: \"d153375a-777b-4331-992a-81c845c6d6eb\") " pod="openshift-multus/multus-additional-cni-plugins-prn84" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.904492 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a290f38c-b94c-4233-9d98-9a54a728cedb-multus-cni-dir\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.904536 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d153375a-777b-4331-992a-81c845c6d6eb-cni-binary-copy\") pod \"multus-additional-cni-plugins-prn84\" (UID: \"d153375a-777b-4331-992a-81c845c6d6eb\") " pod="openshift-multus/multus-additional-cni-plugins-prn84" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.904766 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a290f38c-b94c-4233-9d98-9a54a728cedb-cni-binary-copy\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.904853 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d153375a-777b-4331-992a-81c845c6d6eb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-prn84\" (UID: \"d153375a-777b-4331-992a-81c845c6d6eb\") " pod="openshift-multus/multus-additional-cni-plugins-prn84" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.907861 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2c6a2646-52f7-41be-8a81-3fed6eac75cc-proxy-tls\") pod \"machine-config-daemon-7bxtk\" (UID: \"2c6a2646-52f7-41be-8a81-3fed6eac75cc\") " pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.919278 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5xht\" (UniqueName: \"kubernetes.io/projected/a290f38c-b94c-4233-9d98-9a54a728cedb-kube-api-access-d5xht\") pod \"multus-9q6dk\" (UID: \"a290f38c-b94c-4233-9d98-9a54a728cedb\") " pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.920958 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phqbk\" (UniqueName: 
\"kubernetes.io/projected/2c6a2646-52f7-41be-8a81-3fed6eac75cc-kube-api-access-phqbk\") pod \"machine-config-daemon-7bxtk\" (UID: \"2c6a2646-52f7-41be-8a81-3fed6eac75cc\") " pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.921243 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8f966fc9486cb42a28164dd56ec608330a6697d5e02318f78f9dfce3d79aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\
":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225905e87cce3c64d456d8b5e1bb4a58800fc55d59fcb9e98ef7523f1c8228f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:18Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.922103 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj2bq\" (UniqueName: \"kubernetes.io/projected/d153375a-777b-4331-992a-81c845c6d6eb-kube-api-access-gj2bq\") pod \"multus-additional-cni-plugins-prn84\" (UID: \"d153375a-777b-4331-992a-81c845c6d6eb\") " pod="openshift-multus/multus-additional-cni-plugins-prn84" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 
13:07:18.936786 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d153375a-777b-4331-992a-81c845c6d6eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-prn84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:18Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.954419 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a9a6f6ed65f430774bf05c88a99889462a10a5cdcc0b70fcbaccd4cfec9b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:18Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.964067 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.968264 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:18Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.976971 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-9q6dk" Jan 27 13:07:18 crc kubenswrapper[4786]: W0127 13:07:18.977121 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c6a2646_52f7_41be_8a81_3fed6eac75cc.slice/crio-306bc1d4458020375e17c12be519e9724d89207d2d2013ba05027a1950756270 WatchSource:0}: Error finding container 306bc1d4458020375e17c12be519e9724d89207d2d2013ba05027a1950756270: Status 404 returned error can't find the container with id 306bc1d4458020375e17c12be519e9724d89207d2d2013ba05027a1950756270 Jan 27 13:07:18 crc kubenswrapper[4786]: I0127 13:07:18.990726 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d153375a-777b-4331-992a-81c845c6d6eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-prn84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:18Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:18 crc kubenswrapper[4786]: W0127 13:07:18.994445 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda290f38c_b94c_4233_9d98_9a54a728cedb.slice/crio-b4dbbe6ff9161c92cc1766a41fb2c4a331b5a087e1734499dd4cd831cc052f28 WatchSource:0}: Error finding container b4dbbe6ff9161c92cc1766a41fb2c4a331b5a087e1734499dd4cd831cc052f28: Status 404 returned error can't find the container with id b4dbbe6ff9161c92cc1766a41fb2c4a331b5a087e1734499dd4cd831cc052f28 Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.005465 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-prn84" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.017818 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c6a2646-52f7-41be-8a81-3fed6eac75cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7bxtk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: W0127 13:07:19.021647 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd153375a_777b_4331_992a_81c845c6d6eb.slice/crio-08edda7dbfed5e4ce53a46b1239e85ad3e5b91e2b51db4bcdc1ac50f7ac7b461 WatchSource:0}: Error finding container 08edda7dbfed5e4ce53a46b1239e85ad3e5b91e2b51db4bcdc1ac50f7ac7b461: Status 404 returned error can't find the container with id 08edda7dbfed5e4ce53a46b1239e85ad3e5b91e2b51db4bcdc1ac50f7ac7b461 Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.034984 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.046210 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.062408 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6d56q"] Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.063438 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.063436 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: W0127 13:07:19.065300 4786 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-config": failed to list *v1.ConfigMap: configmaps "ovnkube-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 27 13:07:19 crc kubenswrapper[4786]: E0127 13:07:19.065416 4786 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovnkube-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 13:07:19 crc kubenswrapper[4786]: W0127 13:07:19.065698 4786 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": failed to list *v1.Secret: secrets 
"ovn-kubernetes-node-dockercfg-pwtwl" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 27 13:07:19 crc kubenswrapper[4786]: E0127 13:07:19.065784 4786 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-pwtwl\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-kubernetes-node-dockercfg-pwtwl\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.066955 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.067655 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.067763 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.067809 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.067955 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.082857 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bsqnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd62799-0a3c-4682-acb6-a2bc9121b391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bsqnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.104435 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8573a07-5c6b-490a-abd2-e38fe66ef4f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00de8064231a695efe43b465ac740c45561bd6e1998201059115249a9bd8118\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"light.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:07:16.073073 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:07:16.083509 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:07:16.083537 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083542 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083546 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:07:16.083549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:07:16.083552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:07:16.083555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:07:16.083684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 13:07:16.094546 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-572751212/tls.crt::/tmp/serving-cert-572751212/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769519220\\\\\\\\\\\\\\\" (2026-01-27 13:07:00 +0000 UTC to 2026-02-26 13:07:01 +0000 UTC (now=2026-01-27 13:07:16.094502626 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094907 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 
certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769519221\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769519221\\\\\\\\\\\\\\\" (2026-01-27 12:07:01 +0000 UTC to 2027-01-27 12:07:01 +0000 UTC (now=2026-01-27 13:07:16.094841616 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094999 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nF0127 13:07:16.095038 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771ae
e1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.124415 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8f966fc9486cb42a28164dd56ec608330a6697d5e02318f78f9dfce3d79aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225905e87cce3c64d456d8b5e1bb4a58800fc55d59fcb9e98ef7523f1c8228f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.137556 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9q6dk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a290f38c-b94c-4233-9d98-9a54a728cedb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5xht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9q6dk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.158795 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.170943 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bsqnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd62799-0a3c-4682-acb6-a2bc9121b391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bsqnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.186524 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8573a07-5c6b-490a-abd2-e38fe66ef4f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00de8064231a695efe43b465ac740c45561bd6e1998201059115249a9bd8118\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"light.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:07:16.073073 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:07:16.083509 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:07:16.083537 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083542 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083546 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:07:16.083549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:07:16.083552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:07:16.083555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:07:16.083684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 13:07:16.094546 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-572751212/tls.crt::/tmp/serving-cert-572751212/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769519220\\\\\\\\\\\\\\\" (2026-01-27 13:07:00 +0000 UTC to 2026-02-26 13:07:01 +0000 UTC (now=2026-01-27 13:07:16.094502626 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094907 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 
certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769519221\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769519221\\\\\\\\\\\\\\\" (2026-01-27 12:07:01 +0000 UTC to 2027-01-27 12:07:01 +0000 UTC (now=2026-01-27 13:07:16.094841616 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094999 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nF0127 13:07:16.095038 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771ae
e1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.203997 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.206526 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-log-socket\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.206581 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-host-run-ovn-kubernetes\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.206641 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-host-cni-bin\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.206674 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-ovn-node-metrics-cert\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.206700 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rgf8\" (UniqueName: \"kubernetes.io/projected/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-kube-api-access-5rgf8\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.206721 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-run-ovn\") pod \"ovnkube-node-6d56q\" (UID: 
\"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.206755 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-host-run-netns\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.206777 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-run-openvswitch\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.206797 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-host-cni-netd\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.206830 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-node-log\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.206855 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.206879 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-env-overrides\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.206903 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-ovnkube-script-lib\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.206927 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-var-lib-openvswitch\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.206950 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-run-systemd\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.206971 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-etc-openvswitch\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.206993 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-host-slash\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.207017 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-systemd-units\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.207038 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-host-kubelet\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.207069 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-ovnkube-config\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.223075 4786 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.239971 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8f966fc9486cb42a28164dd56ec608330a6697d5e02318f78f9dfce3d79aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225905e87cce3c64d456d8b5e1bb4a58800fc55d59fcb9e98ef7523f1c8228f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.254894 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9q6dk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a290f38c-b94c-4233-9d98-9a54a728cedb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5xht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9q6dk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.280049 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6d56q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.293961 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c6a2646-52f7-41be-8a81-3fed6eac75cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7bxtk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.307971 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-node-log\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.308035 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-var-lib-openvswitch\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.308065 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.308091 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-env-overrides\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.308094 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-node-log\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.308113 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-ovnkube-script-lib\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.308136 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-run-systemd\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.308158 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-etc-openvswitch\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.308184 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-var-lib-openvswitch\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.308188 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-host-slash\") pod 
\"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.308219 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-host-slash\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.308222 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-systemd-units\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.308251 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-host-kubelet\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.308271 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-ovnkube-config\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.308289 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-log-socket\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.308316 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-host-run-ovn-kubernetes\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.308331 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-host-cni-bin\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.308385 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-ovn-node-metrics-cert\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.308404 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rgf8\" (UniqueName: \"kubernetes.io/projected/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-kube-api-access-5rgf8\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.308421 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-run-ovn\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 
27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.308471 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-host-run-netns\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.308487 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-run-openvswitch\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.308523 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-host-cni-netd\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.308573 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-host-cni-netd\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.308596 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-systemd-units\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.308634 4786 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-host-kubelet\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.308704 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-log-socket\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.308731 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-run-ovn\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.308779 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-host-run-netns\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.308805 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-run-openvswitch\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.308829 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-run-systemd\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.308966 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-env-overrides\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.309016 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-etc-openvswitch\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.308158 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.309089 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-host-cni-bin\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.309126 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-host-run-ovn-kubernetes\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.309401 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-ovnkube-script-lib\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.311391 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a9a6f6ed65f430774bf05c88a99889462a10a5cdcc0b70fcbaccd4cfec9b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.313673 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-ovn-node-metrics-cert\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.325168 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.327498 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rgf8\" (UniqueName: \"kubernetes.io/projected/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-kube-api-access-5rgf8\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.342785 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d153375a-777b-4331-992a-81c845c6d6eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-prn84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.428631 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 10:57:55.376936259 +0000 UTC Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.537027 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.547741 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.548851 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.555526 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6d56q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.568507 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c6a2646-52f7-41be-8a81-3fed6eac75cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7bxtk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.581131 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a9a6f6ed65f430774bf05c88a99889462a10a5cdcc0b70fcbaccd4cfec9b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.593166 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.597837 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"d2ba35bdc2ffd3bc68ad3a19dbcd16b1b04e4e476958651146cf3ebbacf81570"} Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.599180 4786 generic.go:334] "Generic (PLEG): container finished" podID="d153375a-777b-4331-992a-81c845c6d6eb" containerID="65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103" exitCode=0 Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.599257 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" 
event={"ID":"d153375a-777b-4331-992a-81c845c6d6eb","Type":"ContainerDied","Data":"65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103"} Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.599293 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" event={"ID":"d153375a-777b-4331-992a-81c845c6d6eb","Type":"ContainerStarted","Data":"08edda7dbfed5e4ce53a46b1239e85ad3e5b91e2b51db4bcdc1ac50f7ac7b461"} Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.600644 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bsqnf" event={"ID":"bcd62799-0a3c-4682-acb6-a2bc9121b391","Type":"ContainerStarted","Data":"27f958678979f9bc086a378e90e5c6c8fc466d1c52e58bcc042ff72fafb7a4d9"} Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.602169 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9q6dk" event={"ID":"a290f38c-b94c-4233-9d98-9a54a728cedb","Type":"ContainerStarted","Data":"8999aa32071dc4f05aef19c7e34333b8788246bb340368b22d15120c38465d60"} Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.602211 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9q6dk" event={"ID":"a290f38c-b94c-4233-9d98-9a54a728cedb","Type":"ContainerStarted","Data":"b4dbbe6ff9161c92cc1766a41fb2c4a331b5a087e1734499dd4cd831cc052f28"} Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.609583 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" event={"ID":"2c6a2646-52f7-41be-8a81-3fed6eac75cc","Type":"ContainerStarted","Data":"e93f091073d8f9193f660e52b257ca0aba0a3b4efff7d6d8f1ee5ed0e4166162"} Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.609722 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" 
event={"ID":"2c6a2646-52f7-41be-8a81-3fed6eac75cc","Type":"ContainerStarted","Data":"8c39d828ab97f5bcb6d5e29df28ec844ede201872825065370a69b8ec3b035e7"} Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.609741 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" event={"ID":"2c6a2646-52f7-41be-8a81-3fed6eac75cc","Type":"ContainerStarted","Data":"306bc1d4458020375e17c12be519e9724d89207d2d2013ba05027a1950756270"} Jan 27 13:07:19 crc kubenswrapper[4786]: E0127 13:07:19.618138 4786 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.618194 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d153375a-777b-4331-992a-81c845c6d6eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-prn84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.636576 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.648239 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bsqnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd62799-0a3c-4682-acb6-a2bc9121b391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bsqnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.662979 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8573a07-5c6b-490a-abd2-e38fe66ef4f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00de8064231a695efe43b465ac740c45561bd6e1998201059115249a9bd8118\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"light.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:07:16.073073 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:07:16.083509 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:07:16.083537 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083542 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083546 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:07:16.083549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:07:16.083552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:07:16.083555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:07:16.083684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 13:07:16.094546 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-572751212/tls.crt::/tmp/serving-cert-572751212/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769519220\\\\\\\\\\\\\\\" (2026-01-27 13:07:00 +0000 UTC to 2026-02-26 13:07:01 +0000 UTC (now=2026-01-27 13:07:16.094502626 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094907 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 
certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769519221\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769519221\\\\\\\\\\\\\\\" (2026-01-27 12:07:01 +0000 UTC to 2027-01-27 12:07:01 +0000 UTC (now=2026-01-27 13:07:16.094841616 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094999 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nF0127 13:07:16.095038 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771ae
e1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.676954 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.692144 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.709704 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8f966fc9486cb42a28164dd56ec608330a6697d5e02318f78f9dfce3d79aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225905e87cce3c64d456d8b5e1bb4a58800fc55d59fcb9e98ef7523f1c8228f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.722917 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9q6dk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a290f38c-b94c-4233-9d98-9a54a728cedb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5xht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9q6dk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.737503 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8f966fc9486cb42a28164dd56ec608330a6697d5e02318f78f9dfce3d79aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a
2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225905e87cce3c64d456d8b5e1bb4a58800fc55d59fcb9e98ef7523f1c8228f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.750741 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9q6dk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a290f38c-b94c-4233-9d98-9a54a728cedb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8999aa32071dc4f05aef19c7e34333b8788246bb340368b22d15120c38465d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/e
tc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5xht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9q6dk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.768470 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6d56q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.782574 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a9a6f6ed65f430774bf05c88a99889462a10a5cdcc0b70fcbaccd4cfec9b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.800728 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2ba35bdc2ffd3bc68ad3a19dbcd16b1b04e4e476958651146cf3ebbacf81570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.816808 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d153375a-777b-4331-992a-81c845c6d6eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-prn84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.829497 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c6a2646-52f7-41be-8a81-3fed6eac75cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93f091073d8f9193f660e52b257ca0aba0a3b4efff7d6d8f1ee5ed0e4166162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c39d828ab97f5bcb6d5e29df28ec844ede20187
2825065370a69b8ec3b035e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7bxtk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.844164 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8573a07-5c6b-490a-abd2-e38fe66ef4f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00de8064231a695efe43b465ac740c45561bd6e1998201059115249a9bd8118\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"light.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:07:16.073073 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:07:16.083509 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:07:16.083537 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083542 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083546 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:07:16.083549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:07:16.083552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:07:16.083555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:07:16.083684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 13:07:16.094546 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-572751212/tls.crt::/tmp/serving-cert-572751212/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769519220\\\\\\\\\\\\\\\" (2026-01-27 13:07:00 +0000 UTC to 2026-02-26 13:07:01 +0000 UTC (now=2026-01-27 13:07:16.094502626 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094907 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 
certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769519221\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769519221\\\\\\\\\\\\\\\" (2026-01-27 12:07:01 +0000 UTC to 2027-01-27 12:07:01 +0000 UTC (now=2026-01-27 13:07:16.094841616 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094999 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nF0127 13:07:16.095038 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771ae
e1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.856991 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ee4772-2b03-404a-9de5-d98dcb431f0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6f25469fce6943fbdd305039e3ed4bbf9694eb3e485d9a7556a708a1434f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181c870b27579fdf828f52cee2ce312219d03f4d4c298f83cf4f74d0e5af0fc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30779fa178428679a956a66ac9e5f5729db30b3aa0f0c5930f224cf140c5703f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77096ca9ca40766ea3f6ef7e915698ff33f824ba2bcfc0bb2c5b9b2eb64f21f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.872419 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.887734 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.903213 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.917276 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bsqnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd62799-0a3c-4682-acb6-a2bc9121b391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27f958678979f9bc086a378e90e5c6c8fc466d1c52e58bcc042ff72fafb7a4d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bsqnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:19 crc kubenswrapper[4786]: I0127 13:07:19.977723 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.116977 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.117110 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:07:20 crc kubenswrapper[4786]: E0127 13:07:20.117122 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:07:24.117104015 +0000 UTC m=+27.327718134 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.117151 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.117175 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.117194 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:07:20 crc kubenswrapper[4786]: E0127 13:07:20.117225 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 
13:07:20 crc kubenswrapper[4786]: E0127 13:07:20.117237 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 13:07:20 crc kubenswrapper[4786]: E0127 13:07:20.117248 4786 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:07:20 crc kubenswrapper[4786]: E0127 13:07:20.117275 4786 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 13:07:20 crc kubenswrapper[4786]: E0127 13:07:20.117282 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 13:07:24.117273839 +0000 UTC m=+27.327887958 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:07:20 crc kubenswrapper[4786]: E0127 13:07:20.117306 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-27 13:07:24.11729672 +0000 UTC m=+27.327910839 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 13:07:20 crc kubenswrapper[4786]: E0127 13:07:20.117348 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 13:07:20 crc kubenswrapper[4786]: E0127 13:07:20.117402 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 13:07:20 crc kubenswrapper[4786]: E0127 13:07:20.117413 4786 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:07:20 crc kubenswrapper[4786]: E0127 13:07:20.117355 4786 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 13:07:20 crc kubenswrapper[4786]: E0127 13:07:20.117471 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 13:07:24.117454175 +0000 UTC m=+27.328068294 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:07:20 crc kubenswrapper[4786]: E0127 13:07:20.117620 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 13:07:24.117575848 +0000 UTC m=+27.328189967 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.143471 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.157411 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.163158 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.168987 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6d56q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.179572 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-bh896"] Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.180014 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-bh896" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.182807 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.183083 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.183227 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.184079 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.190592 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c6a2646-52f7-41be-8a81-3fed6eac75cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93f091073d8f9193f660e52b257ca0aba0a3b4efff7d6d8f1ee5ed0e4166162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c39d828ab97f5bcb6d5e29df28ec844ede20187
2825065370a69b8ec3b035e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7bxtk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.222672 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a9a6f6ed65f430774bf05c88a99889462a10a5cdcc0b70fcbaccd4cfec9b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.237462 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2ba35bdc2ffd3bc68ad3a19dbcd16b1b04e4e476958651146cf3ebbacf81570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.255926 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d153375a-777b-4331-992a-81c845c6d6eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-prn84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.272184 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.284884 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bsqnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd62799-0a3c-4682-acb6-a2bc9121b391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27f958678979f9bc086a378e90e5c6c8fc466d1c52e58bcc042ff72fafb7a4d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bsqnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.303711 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8573a07-5c6b-490a-abd2-e38fe66ef4f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00de8064231a695efe43b465ac740c45561bd6e1998201059115249a9bd8118\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"light.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:07:16.073073 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:07:16.083509 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:07:16.083537 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083542 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083546 
1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:07:16.083549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:07:16.083552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:07:16.083555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:07:16.083684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 13:07:16.094546 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-572751212/tls.crt::/tmp/serving-cert-572751212/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769519220\\\\\\\\\\\\\\\" (2026-01-27 13:07:00 +0000 UTC to 2026-02-26 13:07:01 +0000 UTC (now=2026-01-27 13:07:16.094502626 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094907 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769519221\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769519221\\\\\\\\\\\\\\\" (2026-01-27 12:07:01 +0000 UTC to 2027-01-27 12:07:01 +0000 UTC (now=2026-01-27 13:07:16.094841616 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094999 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nF0127 13:07:16.095038 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:20 crc kubenswrapper[4786]: E0127 13:07:20.309897 4786 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-config: failed to sync configmap cache: timed out waiting for the condition Jan 27 13:07:20 crc kubenswrapper[4786]: E0127 13:07:20.310225 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-ovnkube-config podName:ad21a31d-efbf-4c10-b3d1-0f6cf71793bd nodeName:}" failed. No retries permitted until 2026-01-27 13:07:20.810192724 +0000 UTC m=+24.020806843 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "ovnkube-config" (UniqueName: "kubernetes.io/configmap/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-ovnkube-config") pod "ovnkube-node-6d56q" (UID: "ad21a31d-efbf-4c10-b3d1-0f6cf71793bd") : failed to sync configmap cache: timed out waiting for the condition Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.318403 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/22e89788-1c8e-46a7-97b4-3981352bb420-serviceca\") pod \"node-ca-bh896\" (UID: \"22e89788-1c8e-46a7-97b4-3981352bb420\") " pod="openshift-image-registry/node-ca-bh896" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.318481 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dl86\" (UniqueName: \"kubernetes.io/projected/22e89788-1c8e-46a7-97b4-3981352bb420-kube-api-access-8dl86\") pod \"node-ca-bh896\" (UID: \"22e89788-1c8e-46a7-97b4-3981352bb420\") " pod="openshift-image-registry/node-ca-bh896" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.318593 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/22e89788-1c8e-46a7-97b4-3981352bb420-host\") pod \"node-ca-bh896\" (UID: \"22e89788-1c8e-46a7-97b4-3981352bb420\") " pod="openshift-image-registry/node-ca-bh896" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.321487 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ee4772-2b03-404a-9de5-d98dcb431f0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6f25469fce6943fbdd305039e3ed4bbf9694eb3e485d9a7556a708a1434f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181c870b27579fdf828f52cee2ce312219d03f4d4c298f83cf4f74d0e5af0fc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30779fa178428679a956a66ac9e5f5729db30b3aa0f0c5930f224cf140c5703f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77096ca9ca40766ea3f6ef7e915698ff33f824ba2bcfc0bb2c5b9b2eb64f21f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.348157 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.372575 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.390836 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8f966fc9486cb42a28164dd56ec608330a6697d5e02318f78f9dfce3d79aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225905e87cce3c64d456d8b5e1bb4a58800fc55d59fcb9e98ef7523f1c8228f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.406895 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9q6dk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a290f38c-b94c-4233-9d98-9a54a728cedb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8999aa32071dc4f05aef19c7e34333b8788246bb340368b22d15120c38465d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5xht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9q6dk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.419842 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/22e89788-1c8e-46a7-97b4-3981352bb420-host\") pod \"node-ca-bh896\" (UID: \"22e89788-1c8e-46a7-97b4-3981352bb420\") " pod="openshift-image-registry/node-ca-bh896" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.420217 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/22e89788-1c8e-46a7-97b4-3981352bb420-serviceca\") pod \"node-ca-bh896\" (UID: \"22e89788-1c8e-46a7-97b4-3981352bb420\") " pod="openshift-image-registry/node-ca-bh896" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.420332 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dl86\" (UniqueName: \"kubernetes.io/projected/22e89788-1c8e-46a7-97b4-3981352bb420-kube-api-access-8dl86\") pod \"node-ca-bh896\" (UID: \"22e89788-1c8e-46a7-97b4-3981352bb420\") " pod="openshift-image-registry/node-ca-bh896" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.420791 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/22e89788-1c8e-46a7-97b4-3981352bb420-host\") pod \"node-ca-bh896\" (UID: \"22e89788-1c8e-46a7-97b4-3981352bb420\") " pod="openshift-image-registry/node-ca-bh896" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.421850 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/22e89788-1c8e-46a7-97b4-3981352bb420-serviceca\") pod \"node-ca-bh896\" (UID: \"22e89788-1c8e-46a7-97b4-3981352bb420\") " pod="openshift-image-registry/node-ca-bh896" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.426788 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6d56q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.429063 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 22:55:24.378305491 +0000 UTC Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.441159 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a9a6f6ed65f430774bf05c88a99889462a10a5cdcc0b70fcbaccd4cfec9b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\
"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.453318 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2ba35bdc2ffd3bc68ad3a19dbcd16b1b04e4e476958651146cf3ebbacf81570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T13:07:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.464869 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.464941 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.465164 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:07:20 crc kubenswrapper[4786]: E0127 13:07:20.465272 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:07:20 crc kubenswrapper[4786]: E0127 13:07:20.465399 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:07:20 crc kubenswrapper[4786]: E0127 13:07:20.465582 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.471395 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d153375a-777b-4331-992a-81c845c6d6eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-prn84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.478972 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dl86\" (UniqueName: \"kubernetes.io/projected/22e89788-1c8e-46a7-97b4-3981352bb420-kube-api-access-8dl86\") pod \"node-ca-bh896\" (UID: \"22e89788-1c8e-46a7-97b4-3981352bb420\") " pod="openshift-image-registry/node-ca-bh896" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.484968 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c6a2646-52f7-41be-8a81-3fed6eac75cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93f091073d8f9193f660e52b257ca0aba0a3b4efff7d6d8f1ee5ed0e4166162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c39d828ab97f5bcb6d5e29df28ec844ede20187
2825065370a69b8ec3b035e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7bxtk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.493781 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-bh896" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.509685 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8573a07-5c6b-490a-abd2-e38fe66ef4f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00de8064231a695efe43b465ac740c45561bd6e1998201059115249a9bd8118\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"light.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:07:16.073073 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:07:16.083509 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:07:16.083537 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083542 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083546 
1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:07:16.083549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:07:16.083552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:07:16.083555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:07:16.083684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 13:07:16.094546 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-572751212/tls.crt::/tmp/serving-cert-572751212/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769519220\\\\\\\\\\\\\\\" (2026-01-27 13:07:00 +0000 UTC to 2026-02-26 13:07:01 +0000 UTC (now=2026-01-27 13:07:16.094502626 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094907 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769519221\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769519221\\\\\\\\\\\\\\\" (2026-01-27 12:07:01 +0000 UTC to 2027-01-27 12:07:01 +0000 UTC (now=2026-01-27 13:07:16.094841616 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094999 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nF0127 13:07:16.095038 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.525176 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ee4772-2b03-404a-9de5-d98dcb431f0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6f25469fce6943fbdd305039e3ed4bbf9694eb3e485d9a7556a708a1434f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181c870b27579fdf828f52cee2ce312219d03f4d4c298f83cf4f74d0e5af0fc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30779fa178428679a956a66ac9e5f5729db30b3aa0f0c5930f224cf140c5703f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77096ca9ca40766ea3f6ef7e915698ff33f824ba2bcfc0bb2c5b9b2eb64f21f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.541479 4786 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.554762 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.578525 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.614574 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bsqnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd62799-0a3c-4682-acb6-a2bc9121b391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27f958678979f9bc086a378e90e5c6c8fc466d1c52e58bcc042ff72fafb7a4d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bsqnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.616996 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-bh896" event={"ID":"22e89788-1c8e-46a7-97b4-3981352bb420","Type":"ContainerStarted","Data":"6106b17169fe35e15d8747d830e2e37c3eeebfb7f8a621944eda5d59035c834b"} Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.624838 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.626917 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" event={"ID":"d153375a-777b-4331-992a-81c845c6d6eb","Type":"ContainerStarted","Data":"0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca"} Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.712870 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67692efe-6dd1-450c-9c0c-a16b104376b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd5d21270850bfee1e52c235f542505dc4b8129379b8ef932ed1d59c599abee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b35278bed2eea01653cf49906c6a1f3447c33d1b80963a3dd10e07a854ec57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9284ead95a7669e31809ac214e2297884715a760a2eb4984e404c059ff2b6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8aec363d5b95eaca7107351edfd1ae381e714391a4db22a72a30d0250534254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9a9fd35c4ee52eeffce0566113373c6d70b3ccadc124e7d6a4b8c7ba1d21a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.738203 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8f966fc9486cb42a28164dd56ec608330a6697d5e02318f78f9dfce3d79aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225905e87cce3c64d456d8b5e1bb4a58800fc55d59fcb9e98ef7523f1c8228f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.762639 4786 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9q6dk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a290f38c-b94c-4233-9d98-9a54a728cedb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8999aa32071dc4f05aef19c7e34333b8788246bb340368b22d15120c38465d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\"
:\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5xht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9q6dk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.793401 4786 
status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bh896" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22e89788-1c8e-46a7-97b4-3981352bb420\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dl86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bh896\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.825065 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-ovnkube-config\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.826311 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-ovnkube-config\") pod \"ovnkube-node-6d56q\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.839962 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67692efe-6dd1-450c-9c0c-a16b104376b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd5d21270850bfee1e52c235f542505dc4b8129379b8ef932ed1d59c599abee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b35278bed2eea01653cf49906c6a1f3447c33d1b80963a3dd10e07a854ec57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9284ead95a7669e31809ac214e2297884715a760a2eb4984e404c059ff2b6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8aec363d5b95eaca7107351edfd1ae381e714391a4db22a72a30d0250534
254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9a9fd35c4ee52eeffce0566113373c6d70b3ccadc124e7d6a4b8c7ba1d21a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\
\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.877636 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8f966fc9486cb42a28164dd56ec608330a6697d5e02318f78f9dfce3d79aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225905e87cce3c64d456d8b5e1bb4a58800fc55d59fcb9e98ef7523f1c8228f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.894899 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.920643 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9q6dk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a290f38c-b94c-4233-9d98-9a54a728cedb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8999aa32071dc4f05aef19c7e34333b8788246bb340368b22d15120c38465d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mo
untPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5xht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9q6dk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-27T13:07:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:20 crc kubenswrapper[4786]: I0127 13:07:20.952786 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bh896" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22e89788-1c8e-46a7-97b4-3981352bb420\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dl86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bh896\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:21 crc kubenswrapper[4786]: I0127 13:07:21.001845 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6d56q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:21 crc kubenswrapper[4786]: I0127 13:07:21.035316 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c6a2646-52f7-41be-8a81-3fed6eac75cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93f091073d8f9193f660e52b257ca0aba0a3b4efff7d6d8f1ee5ed0e4166162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"sta
rtedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c39d828ab97f5bcb6d5e29df28ec844ede201872825065370a69b8ec3b035e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7bxtk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:21 crc kubenswrapper[4786]: I0127 
13:07:21.075152 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a9a6f6ed65f430774bf05c88a99889462a10a5cdcc0b70fcbaccd4cfec9b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:21 crc kubenswrapper[4786]: I0127 13:07:21.113212 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2ba35bdc2ffd3bc68ad3a19dbcd16b1b04e4e476958651146cf3ebbacf81570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:21 crc kubenswrapper[4786]: I0127 13:07:21.157446 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d153375a-777b-4331-992a-81c845c6d6eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-prn84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:21 crc kubenswrapper[4786]: I0127 13:07:21.196825 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:21 crc kubenswrapper[4786]: I0127 13:07:21.232765 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bsqnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd62799-0a3c-4682-acb6-a2bc9121b391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27f958678979f9bc086a378e90e5c6c8fc466d1c52e58bcc042ff72fafb7a4d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bsqnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:21 crc kubenswrapper[4786]: I0127 13:07:21.276685 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8573a07-5c6b-490a-abd2-e38fe66ef4f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00de8064231a695efe43b465ac740c45561bd6e1998201059115249a9bd8118\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"light.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:07:16.073073 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:07:16.083509 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:07:16.083537 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083542 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083546 
1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:07:16.083549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:07:16.083552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:07:16.083555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:07:16.083684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 13:07:16.094546 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-572751212/tls.crt::/tmp/serving-cert-572751212/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769519220\\\\\\\\\\\\\\\" (2026-01-27 13:07:00 +0000 UTC to 2026-02-26 13:07:01 +0000 UTC (now=2026-01-27 13:07:16.094502626 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094907 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769519221\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769519221\\\\\\\\\\\\\\\" (2026-01-27 12:07:01 +0000 UTC to 2027-01-27 12:07:01 +0000 UTC (now=2026-01-27 13:07:16.094841616 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094999 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nF0127 13:07:16.095038 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:21 crc kubenswrapper[4786]: I0127 13:07:21.314517 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ee4772-2b03-404a-9de5-d98dcb431f0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6f25469fce6943fbdd305039e3ed4bbf9694eb3e485d9a7556a708a1434f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181c870b27579fdf828f52cee2ce312219d03f4d4c298f83cf4f74d0e5af0fc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30779fa178428679a956a66ac9e5f5729db30b3aa0f0c5930f224cf140c5703f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77096ca9ca40766ea3f6ef7e915698ff33f824ba2bcfc0bb2c5b9b2eb64f21f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:21 crc kubenswrapper[4786]: I0127 13:07:21.355425 4786 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:21 crc kubenswrapper[4786]: I0127 13:07:21.395298 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:21 crc kubenswrapper[4786]: I0127 13:07:21.429439 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 14:42:47.034760979 +0000 UTC Jan 27 13:07:21 crc kubenswrapper[4786]: I0127 13:07:21.632921 4786 generic.go:334] "Generic (PLEG): container finished" podID="d153375a-777b-4331-992a-81c845c6d6eb" 
containerID="0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca" exitCode=0 Jan 27 13:07:21 crc kubenswrapper[4786]: I0127 13:07:21.632959 4786 generic.go:334] "Generic (PLEG): container finished" podID="d153375a-777b-4331-992a-81c845c6d6eb" containerID="eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9" exitCode=0 Jan 27 13:07:21 crc kubenswrapper[4786]: I0127 13:07:21.633000 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" event={"ID":"d153375a-777b-4331-992a-81c845c6d6eb","Type":"ContainerDied","Data":"0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca"} Jan 27 13:07:21 crc kubenswrapper[4786]: I0127 13:07:21.633028 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" event={"ID":"d153375a-777b-4331-992a-81c845c6d6eb","Type":"ContainerDied","Data":"eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9"} Jan 27 13:07:21 crc kubenswrapper[4786]: I0127 13:07:21.636304 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-bh896" event={"ID":"22e89788-1c8e-46a7-97b4-3981352bb420","Type":"ContainerStarted","Data":"3025c216a87acd5041f5dea80529063ca66929e021eadd55f17145fe1ae0a80b"} Jan 27 13:07:21 crc kubenswrapper[4786]: I0127 13:07:21.639254 4786 generic.go:334] "Generic (PLEG): container finished" podID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerID="6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f" exitCode=0 Jan 27 13:07:21 crc kubenswrapper[4786]: I0127 13:07:21.639297 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" event={"ID":"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd","Type":"ContainerDied","Data":"6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f"} Jan 27 13:07:21 crc kubenswrapper[4786]: I0127 13:07:21.639319 4786 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" event={"ID":"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd","Type":"ContainerStarted","Data":"05a861605dad0e44cd137e3dcb8abd841b6ef4f225479e130cbb7feea7399bd8"} Jan 27 13:07:21 crc kubenswrapper[4786]: I0127 13:07:21.654702 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a9a6f6ed65f430774bf05c88a99889462a10a5cdcc0b70fcbaccd4cfec9b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:21 crc kubenswrapper[4786]: I0127 13:07:21.677062 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2ba35bdc2ffd3bc68ad3a19dbcd16b1b04e4e476958651146cf3ebbacf81570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-2
7T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:21 crc kubenswrapper[4786]: I0127 13:07:21.695252 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d153375a-777b-4331-992a-81c845c6d6eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f3
4976fff2e183421ab20b88b5ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-prn84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:21 crc kubenswrapper[4786]: I0127 13:07:21.721129 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c6a2646-52f7-41be-8a81-3fed6eac75cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93f091073d8f9193f660e52b257ca0aba0a3b4efff7d6d8f1ee5ed0e4166162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c39d828ab97f5bcb6d5e29df28ec844ede201872825065370a69b8ec3b035e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7bxtk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:21 crc kubenswrapper[4786]: I0127 13:07:21.737181 4786 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-bsqnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd62799-0a3c-4682-acb6-a2bc9121b391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27f958678979f9bc086a378e90e5c6c8fc466d1c52e58bcc042ff72fafb7a4d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.1
68.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bsqnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:21 crc kubenswrapper[4786]: I0127 13:07:21.758021 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8573a07-5c6b-490a-abd2-e38fe66ef4f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00de8064231a695efe43b465ac740c45561bd6e1998201059115249a9bd8118\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"light.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:07:16.073073 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:07:16.083509 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:07:16.083537 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083542 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083546 
1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:07:16.083549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:07:16.083552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:07:16.083555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:07:16.083684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 13:07:16.094546 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-572751212/tls.crt::/tmp/serving-cert-572751212/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769519220\\\\\\\\\\\\\\\" (2026-01-27 13:07:00 +0000 UTC to 2026-02-26 13:07:01 +0000 UTC (now=2026-01-27 13:07:16.094502626 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094907 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769519221\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769519221\\\\\\\\\\\\\\\" (2026-01-27 12:07:01 +0000 UTC to 2027-01-27 12:07:01 +0000 UTC (now=2026-01-27 13:07:16.094841616 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094999 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nF0127 13:07:16.095038 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:21 crc kubenswrapper[4786]: I0127 13:07:21.775179 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ee4772-2b03-404a-9de5-d98dcb431f0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6f25469fce6943fbdd305039e3ed4bbf9694eb3e485d9a7556a708a1434f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181c870b27579fdf828f52cee2ce312219d03f4d4c298f83cf4f74d0e5af0fc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30779fa178428679a956a66ac9e5f5729db30b3aa0f0c5930f224cf140c5703f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77096ca9ca40766ea3f6ef7e915698ff33f824ba2bcfc0bb2c5b9b2eb64f21f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:21 crc kubenswrapper[4786]: I0127 13:07:21.791768 4786 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:21 crc kubenswrapper[4786]: I0127 13:07:21.807320 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:21 crc kubenswrapper[4786]: I0127 13:07:21.823466 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:21 crc kubenswrapper[4786]: I0127 13:07:21.844804 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67692efe-6dd1-450c-9c0c-a16b104376b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd5d21270850bfee1e52c235f542505dc4b8129379b8ef932ed1d59c599abee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b35278bed2eea01653cf49906c6a1f3447c33d1b80963a3dd10e07a854ec57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9284ead95a7669e31809ac214e2297884715a760a2eb4984e404c059ff2b6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8aec363d5b95eaca7107351edfd1ae381e714391a4db22a72a30d0250534254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9a9fd35c4ee52eeffce0566113373c6d70b3ccadc124e7d6a4b8c7ba1d21a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:21 crc kubenswrapper[4786]: I0127 13:07:21.875998 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8f966fc9486cb42a28164dd56ec608330a6697d5e02318f78f9dfce3d79aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225905e87cce3c64d456d8b5e1bb4a58800fc55d59fcb9e98ef7523f1c8228f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:21 crc kubenswrapper[4786]: I0127 13:07:21.918621 4786 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9q6dk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a290f38c-b94c-4233-9d98-9a54a728cedb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8999aa32071dc4f05aef19c7e34333b8788246bb340368b22d15120c38465d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\"
:\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5xht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9q6dk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:21 crc kubenswrapper[4786]: I0127 13:07:21.954516 4786 
status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bh896" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22e89788-1c8e-46a7-97b4-3981352bb420\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dl86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bh896\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:21Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.005129 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6d56q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.033699 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bh896" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22e89788-1c8e-46a7-97b4-3981352bb420\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3025c216a87acd5041f5dea80529063ca66929e021eadd55f17145fe1ae0a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07
:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dl86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bh896\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.086708 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67692efe-6dd1-450c-9c0c-a16b104376b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd5d21270850bfee1e52c235f542505dc4b8129379b8ef932ed1d59c599abee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b35278bed2eea01653cf49906c6a1f3447c33d1b80963a3dd10e07a854ec57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9284ead95a7669e31809ac214e2297884715a760a2eb4984e404c059ff2b6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8aec363d5b95eaca7107351edfd1ae381e714391a4db22a72a30d0250534254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9a9fd35c4ee52eeffce0566113373c6d70b3ccadc124e7d6a4b8c7ba1d21a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.114245 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8f966fc9486cb42a28164dd56ec608330a6697d5e02318f78f9dfce3d79aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225905e87cce3c64d456d8b5e1bb4a58800fc55d59fcb9e98ef7523f1c8228f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.159107 4786 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9q6dk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a290f38c-b94c-4233-9d98-9a54a728cedb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8999aa32071dc4f05aef19c7e34333b8788246bb340368b22d15120c38465d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\"
:\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5xht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9q6dk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.204495 4786 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6d56q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.240716 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d153375a-777b-4331-992a-81c845c6d6eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f3
4976fff2e183421ab20b88b5ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-prn84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.277683 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c6a2646-52f7-41be-8a81-3fed6eac75cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93f091073d8f9193f660e52b257ca0aba0a3b4efff7d6d8f1ee5ed0e4166162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c39d828ab97f5bcb6d5e29df28ec844ede201872825065370a69b8ec3b035e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7bxtk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.316629 4786 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a9a6f6ed65f430774bf05c88a99889462a10a5cdcc0b70fcbaccd4cfec9b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.353092 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2ba35bdc2ffd3bc68ad3a19dbcd16b1b04e4e476958651146cf3ebbacf81570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.395785 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.429787 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 10:38:46.584850393 +0000 UTC Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.435122 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.464594 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.464647 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.464702 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:07:22 crc kubenswrapper[4786]: E0127 13:07:22.464815 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:07:22 crc kubenswrapper[4786]: E0127 13:07:22.464952 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:07:22 crc kubenswrapper[4786]: E0127 13:07:22.465020 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.467882 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.470162 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.470191 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.470200 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.470274 4786 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.473205 4786 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-dns/node-resolver-bsqnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd62799-0a3c-4682-acb6-a2bc9121b391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27f958678979f9bc086a378e90e5c6c8fc466d1c52e58bcc042ff72fafb7a4d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bsqnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.527291 4786 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.527581 4786 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.528736 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.528775 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.528787 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.528804 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.528813 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:22Z","lastTransitionTime":"2026-01-27T13:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:22 crc kubenswrapper[4786]: E0127 13:07:22.547180 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18042af8-71e3-4882-b2ca-158fe4a2012f\\\",\\\"systemUUID\\\":\\\"56795cdc-7796-46ae-b42e-edbe6c464279\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.554988 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.555074 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.555085 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.555104 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.555114 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:22Z","lastTransitionTime":"2026-01-27T13:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.561250 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8573a07-5c6b-490a-abd2-e38fe66ef4f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00de8064231a695efe43b465ac740c45561bd6e1998201059115249a9bd8118\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"light.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:07:16.073073 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:07:16.083509 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:07:16.083537 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083542 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083546 
1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:07:16.083549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:07:16.083552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:07:16.083555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:07:16.083684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 13:07:16.094546 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-572751212/tls.crt::/tmp/serving-cert-572751212/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769519220\\\\\\\\\\\\\\\" (2026-01-27 13:07:00 +0000 UTC to 2026-02-26 13:07:01 +0000 UTC (now=2026-01-27 13:07:16.094502626 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094907 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769519221\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769519221\\\\\\\\\\\\\\\" (2026-01-27 12:07:01 +0000 UTC to 2027-01-27 12:07:01 +0000 UTC (now=2026-01-27 13:07:16.094841616 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094999 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nF0127 13:07:16.095038 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:22 crc kubenswrapper[4786]: E0127 13:07:22.568043 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:22Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18042af8-71e3-4882-b2ca-158fe4a2012f\\\",\\\"systemUUID\\\":\\\"56795cdc-7796-46ae-b42e-edbe6c464279\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.571556 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.571598 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.571619 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.571637 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.571648 4786 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:22Z","lastTransitionTime":"2026-01-27T13:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:22 crc kubenswrapper[4786]: E0127 13:07:22.582424 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18042af8-71e3-4882-b2ca-158fe4a2012f\\\",\\\"systemUUID\\\":\\\"56795cdc-7796-46ae-b42e-edbe6c464279\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.585806 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.585850 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.585866 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.585886 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.585900 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:22Z","lastTransitionTime":"2026-01-27T13:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.594698 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ee4772-2b03-404a-9de5-d98dcb431f0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6f25469fce6943fbdd305039e3ed4bbf9694eb3e485d9a7556a708a1434f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181c870b275
79fdf828f52cee2ce312219d03f4d4c298f83cf4f74d0e5af0fc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30779fa178428679a956a66ac9e5f5729db30b3aa0f0c5930f224cf140c5703f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77096ca9ca40766ea3f6ef7e915698ff33f824ba2bcfc0bb2c5b9b2eb64f21f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:22 crc kubenswrapper[4786]: E0127 13:07:22.600552 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18042af8-71e3-4882-b2ca-158fe4a2012f\\\",\\\"systemUUID\\\":\\\"56795cdc-7796-46ae-b42e-edbe6c464279\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.605670 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.605730 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.605744 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.605768 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.605784 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:22Z","lastTransitionTime":"2026-01-27T13:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:22 crc kubenswrapper[4786]: E0127 13:07:22.619099 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18042af8-71e3-4882-b2ca-158fe4a2012f\\\",\\\"systemUUID\\\":\\\"56795cdc-7796-46ae-b42e-edbe6c464279\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:22 crc kubenswrapper[4786]: E0127 13:07:22.619234 4786 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.621832 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.621900 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.621919 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.621949 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.621969 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:22Z","lastTransitionTime":"2026-01-27T13:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.641491 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.647213 4786 generic.go:334] "Generic (PLEG): container finished" podID="d153375a-777b-4331-992a-81c845c6d6eb" containerID="2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4" exitCode=0 Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.647299 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" event={"ID":"d153375a-777b-4331-992a-81c845c6d6eb","Type":"ContainerDied","Data":"2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4"} Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.653018 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" event={"ID":"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd","Type":"ContainerStarted","Data":"0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0"} Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.653050 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" event={"ID":"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd","Type":"ContainerStarted","Data":"97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1"} Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.653065 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" event={"ID":"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd","Type":"ContainerStarted","Data":"74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c"} Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.653078 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" event={"ID":"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd","Type":"ContainerStarted","Data":"0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43"} Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.653092 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" event={"ID":"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd","Type":"ContainerStarted","Data":"81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe"} Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.653301 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" event={"ID":"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd","Type":"ContainerStarted","Data":"fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9"} Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.694152 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6d56q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.719326 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d153375a-777b-4331-992a-81c845c6d6eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f3
4976fff2e183421ab20b88b5ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-prn84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.726178 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.726229 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.726244 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.726267 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.726285 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:22Z","lastTransitionTime":"2026-01-27T13:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.757730 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c6a2646-52f7-41be-8a81-3fed6eac75cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93f091073d8f9193f660e52b257ca0aba0a3b4efff7d6d8f1ee5ed0e4166162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c39d828ab97f5bcb6d5e29df28ec844ede201872825065370a69b8ec3b035e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7bxtk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.797954 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a9a6f6ed65f430774bf05c88a99889462a10a5cdcc0b70fcbaccd4cfec9b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.829845 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.829877 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.829889 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.829905 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.829916 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:22Z","lastTransitionTime":"2026-01-27T13:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.835567 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2ba35bdc2ffd3bc68ad3a19dbcd16b1b04e4e476958651146cf3ebbacf81570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.875695 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.914662 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.932816 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.932875 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.932885 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.932909 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.932923 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:22Z","lastTransitionTime":"2026-01-27T13:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.953704 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bsqnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd62799-0a3c-4682-acb6-a2bc9121b391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27f958678979f9bc086a378e90e5c6c8fc466d1c52e58bcc042ff72fafb7a4d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bsqnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:22 crc kubenswrapper[4786]: I0127 13:07:22.997811 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8573a07-5c6b-490a-abd2-e38fe66ef4f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00de8064231a695efe43b465ac740c45561bd6e1998201059115249a9bd8118\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"light.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:07:16.073073 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:07:16.083509 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:07:16.083537 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083542 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083546 
1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:07:16.083549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:07:16.083552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:07:16.083555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:07:16.083684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 13:07:16.094546 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-572751212/tls.crt::/tmp/serving-cert-572751212/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769519220\\\\\\\\\\\\\\\" (2026-01-27 13:07:00 +0000 UTC to 2026-02-26 13:07:01 +0000 UTC (now=2026-01-27 13:07:16.094502626 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094907 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769519221\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769519221\\\\\\\\\\\\\\\" (2026-01-27 12:07:01 +0000 UTC to 2027-01-27 12:07:01 +0000 UTC (now=2026-01-27 13:07:16.094841616 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094999 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nF0127 13:07:16.095038 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.037532 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.037593 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.037641 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.037665 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.037679 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:23Z","lastTransitionTime":"2026-01-27T13:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.039224 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ee4772-2b03-404a-9de5-d98dcb431f0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6f25469fce6943fbdd305039e3ed4bbf9694eb3e485d9a7556a708a1434f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181c870b275
79fdf828f52cee2ce312219d03f4d4c298f83cf4f74d0e5af0fc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30779fa178428679a956a66ac9e5f5729db30b3aa0f0c5930f224cf140c5703f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77096ca9ca40766ea3f6ef7e915698ff33f824ba2bcfc0bb2c5b9b2eb64f21f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:23Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.076706 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:23Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.114513 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bh896" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22e89788-1c8e-46a7-97b4-3981352bb420\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3025c216a87acd5041f5dea80529063ca66929e021eadd55f17145fe1ae0a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dl86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bh896\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:23Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.140801 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.140851 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.140865 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.140884 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.140896 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:23Z","lastTransitionTime":"2026-01-27T13:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.162696 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67692efe-6dd1-450c-9c0c-a16b104376b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd5d21270850bfee1e52c235f542505dc4b8129379b8ef932ed1d59c599abee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b35278bed2eea01653cf49906c6a1f3447c33d1b80963a3dd10e07a854ec57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9284ead95a7669e31809ac214e2297884715a760a2eb4984e404c059ff2b6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8aec363d5b95eaca7107351edfd1ae381e714391a4db22a72a30d0250534254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9a9fd35c4ee52eeffce0566113373c6d70b3ccadc124e7d6a4b8c7ba1d21a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:23Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.195551 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8f966fc9486cb42a28164dd56ec608330a6697d5e02318f78f9dfce3d79aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225905e87cce3c64d456d8b5e1bb4a58800fc55d59fcb9e98ef7523f1c8228f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:23Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.237193 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9q6dk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a290f38c-b94c-4233-9d98-9a54a728cedb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8999aa32071dc4f05aef19c7e34333b8788246bb340368b22d15120c38465d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5xht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9q6dk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:23Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.244467 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:23 crc 
kubenswrapper[4786]: I0127 13:07:23.244536 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.244547 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.244565 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.244585 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:23Z","lastTransitionTime":"2026-01-27T13:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.347351 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.347385 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.347396 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.347412 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.347423 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:23Z","lastTransitionTime":"2026-01-27T13:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.430918 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 19:59:24.341507139 +0000 UTC Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.451124 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.451181 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.451200 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.451223 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.451240 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:23Z","lastTransitionTime":"2026-01-27T13:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.554293 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.554339 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.554347 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.554362 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.554371 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:23Z","lastTransitionTime":"2026-01-27T13:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.656419 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.656476 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.656494 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.656522 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.656539 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:23Z","lastTransitionTime":"2026-01-27T13:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.659791 4786 generic.go:334] "Generic (PLEG): container finished" podID="d153375a-777b-4331-992a-81c845c6d6eb" containerID="1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21" exitCode=0 Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.659839 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" event={"ID":"d153375a-777b-4331-992a-81c845c6d6eb","Type":"ContainerDied","Data":"1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21"} Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.678874 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\
\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6d56q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:23Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.701521 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a9a6f6ed65f430774bf05c88a99889462a10a5cdcc0b70fcbaccd4cfec9b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e
01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:23Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.720977 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2ba35bdc2ffd3bc68ad3a19dbcd16b1b04e4e476958651146cf3ebbacf81570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T13:07:23Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.738232 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d153375a-777b-4331-992a-81c845c6d6eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-prn84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:23Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.754450 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c6a2646-52f7-41be-8a81-3fed6eac75cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93f091073d8f9193f660e52b257ca0aba0a3b4efff7d6d8f1ee5ed0e4166162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c39d828ab97f5bcb6d5e29df28ec844ede20187
2825065370a69b8ec3b035e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7bxtk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:23Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.759941 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.759975 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.759984 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:23 crc 
kubenswrapper[4786]: I0127 13:07:23.760037 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.760048 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:23Z","lastTransitionTime":"2026-01-27T13:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.774958 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8573a07-5c6b-490a-abd2-e38fe66ef4f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00de8064231a695efe43b465ac740c45561bd6e1998201059115249a9bd8118\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"light.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:07:16.073073 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:07:16.083509 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:07:16.083537 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083542 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083546 
1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:07:16.083549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:07:16.083552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:07:16.083555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:07:16.083684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 13:07:16.094546 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-572751212/tls.crt::/tmp/serving-cert-572751212/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769519220\\\\\\\\\\\\\\\" (2026-01-27 13:07:00 +0000 UTC to 2026-02-26 13:07:01 +0000 UTC (now=2026-01-27 13:07:16.094502626 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094907 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769519221\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769519221\\\\\\\\\\\\\\\" (2026-01-27 12:07:01 +0000 UTC to 2027-01-27 12:07:01 +0000 UTC (now=2026-01-27 13:07:16.094841616 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094999 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nF0127 13:07:16.095038 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:23Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.790081 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ee4772-2b03-404a-9de5-d98dcb431f0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6f25469fce6943fbdd305039e3ed4bbf9694eb3e485d9a7556a708a1434f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181c870b27579fdf828f52cee2ce312219d03f4d4c298f83cf4f74d0e5af0fc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30779fa178428679a956a66ac9e5f5729db30b3aa0f0c5930f224cf140c5703f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77096ca9ca40766ea3f6ef7e915698ff33f824ba2bcfc0bb2c5b9b2eb64f21f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:23Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.804349 4786 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:23Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.816255 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:23Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.831386 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:23Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.842734 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bsqnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd62799-0a3c-4682-acb6-a2bc9121b391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27f958678979f9bc086a378e90e5c6c8fc466d1c52e58bcc042ff72fafb7a4d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bsqnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:23Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.861938 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.861964 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.861972 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.861995 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.862006 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:23Z","lastTransitionTime":"2026-01-27T13:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.865866 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67692efe-6dd1-450c-9c0c-a16b104376b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd5d21270850bfee1e52c235f542505dc4b8129379b8ef932ed1d59c599abee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b35278bed2eea01653cf49906c6a1f3447c33d1b80963a3dd10e07a854ec57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9284ead95a7669e31809ac214e2297884715a760a2eb4984e404c059ff2b6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8aec363d5b95eaca7107351edfd1ae381e714391a4db22a72a30d0250534254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9a9fd35c4ee52eeffce0566113373c6d70b3ccadc124e7d6a4b8c7ba1d21a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:23Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.880699 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8f966fc9486cb42a28164dd56ec608330a6697d5e02318f78f9dfce3d79aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225905e87cce3c64d456d8b5e1bb4a58800fc55d59fcb9e98ef7523f1c8228f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:23Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.895682 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9q6dk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a290f38c-b94c-4233-9d98-9a54a728cedb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8999aa32071dc4f05aef19c7e34333b8788246bb340368b22d15120c38465d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5xht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9q6dk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:23Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.907490 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bh896" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22e89788-1c8e-46a7-97b4-3981352bb420\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3025c216a87acd5041f5dea80529063ca66929e021eadd55f17145fe1ae0a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dl86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bh896\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:23Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.964316 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.964363 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.964380 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.964403 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:23 crc kubenswrapper[4786]: I0127 13:07:23.964419 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:23Z","lastTransitionTime":"2026-01-27T13:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.067366 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.067400 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.067409 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.067422 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.067432 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:24Z","lastTransitionTime":"2026-01-27T13:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.160504 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.160666 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.160706 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.160731 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.160754 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:07:24 crc kubenswrapper[4786]: E0127 13:07:24.160853 4786 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 13:07:24 crc kubenswrapper[4786]: E0127 13:07:24.160886 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 13:07:24 crc kubenswrapper[4786]: E0127 13:07:24.160913 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 13:07:32.160895605 +0000 UTC m=+35.371509724 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 13:07:24 crc kubenswrapper[4786]: E0127 13:07:24.160928 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 13:07:24 crc kubenswrapper[4786]: E0127 13:07:24.160959 4786 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:07:24 crc kubenswrapper[4786]: E0127 13:07:24.161010 4786 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 13:07:24 crc kubenswrapper[4786]: E0127 13:07:24.161056 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 13:07:32.161032289 +0000 UTC m=+35.371646448 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:07:24 crc kubenswrapper[4786]: E0127 13:07:24.161147 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 13:07:32.161113451 +0000 UTC m=+35.371727610 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 13:07:24 crc kubenswrapper[4786]: E0127 13:07:24.160886 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 13:07:24 crc kubenswrapper[4786]: E0127 13:07:24.161203 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 13:07:24 crc kubenswrapper[4786]: E0127 13:07:24.161223 4786 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:07:24 crc kubenswrapper[4786]: 
E0127 13:07:24.161283 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 13:07:32.161264255 +0000 UTC m=+35.371878464 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:07:24 crc kubenswrapper[4786]: E0127 13:07:24.161352 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:07:32.161336937 +0000 UTC m=+35.371951096 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.171167 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.171210 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.171231 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.171259 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.171282 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:24Z","lastTransitionTime":"2026-01-27T13:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.274206 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.274264 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.274282 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.274332 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.274351 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:24Z","lastTransitionTime":"2026-01-27T13:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.377237 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.377279 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.377296 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.377316 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.377328 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:24Z","lastTransitionTime":"2026-01-27T13:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.432128 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 04:11:15.557691386 +0000 UTC Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.463985 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.464075 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:07:24 crc kubenswrapper[4786]: E0127 13:07:24.464150 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.464206 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:07:24 crc kubenswrapper[4786]: E0127 13:07:24.464245 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:07:24 crc kubenswrapper[4786]: E0127 13:07:24.464405 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.479279 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.479341 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.479351 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.479370 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.479381 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:24Z","lastTransitionTime":"2026-01-27T13:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.581876 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.581912 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.581922 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.581939 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.581951 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:24Z","lastTransitionTime":"2026-01-27T13:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.676289 4786 generic.go:334] "Generic (PLEG): container finished" podID="d153375a-777b-4331-992a-81c845c6d6eb" containerID="f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023" exitCode=0 Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.676351 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" event={"ID":"d153375a-777b-4331-992a-81c845c6d6eb","Type":"ContainerDied","Data":"f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023"} Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.684826 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.684872 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.684884 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.684902 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.684914 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:24Z","lastTransitionTime":"2026-01-27T13:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.736367 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6d56q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:24Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.750045 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c6a2646-52f7-41be-8a81-3fed6eac75cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93f091073d8f9193f660e52b257ca0aba0a3b4efff7d6d8f1ee5ed0e4166162\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c39d828ab97f5bcb6d5e29df28ec844ede201872825065370a69b8ec3b035e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-7bxtk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:24Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.765729 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a9a6f6ed65f430774bf05c88a99889462a10a5cdcc0b70fcbaccd4cfec9b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:24Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.781047 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2ba35bdc2ffd3bc68ad3a19dbcd16b1b04e4e476958651146cf3ebbacf81570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:24Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.787887 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.787956 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.787973 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.788001 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.788018 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:24Z","lastTransitionTime":"2026-01-27T13:07:24Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.798834 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d153375a-777b-4331-992a-81c845c6d6eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-prn84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:24Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.812666 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:24Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.825250 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bsqnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd62799-0a3c-4682-acb6-a2bc9121b391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27f958678979f9bc086a378e90e5c6c8fc466d1c52e58bcc042ff72fafb7a4d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bsqnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:24Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.840287 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8573a07-5c6b-490a-abd2-e38fe66ef4f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00de8064231a695efe43b465ac740c45561bd6e1998201059115249a9bd8118\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"light.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:07:16.073073 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:07:16.083509 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:07:16.083537 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083542 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083546 
1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:07:16.083549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:07:16.083552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:07:16.083555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:07:16.083684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 13:07:16.094546 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-572751212/tls.crt::/tmp/serving-cert-572751212/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769519220\\\\\\\\\\\\\\\" (2026-01-27 13:07:00 +0000 UTC to 2026-02-26 13:07:01 +0000 UTC (now=2026-01-27 13:07:16.094502626 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094907 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769519221\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769519221\\\\\\\\\\\\\\\" (2026-01-27 12:07:01 +0000 UTC to 2027-01-27 12:07:01 +0000 UTC (now=2026-01-27 13:07:16.094841616 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094999 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nF0127 13:07:16.095038 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:24Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.853267 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ee4772-2b03-404a-9de5-d98dcb431f0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6f25469fce6943fbdd305039e3ed4bbf9694eb3e485d9a7556a708a1434f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181c870b27579fdf828f52cee2ce312219d03f4d4c298f83cf4f74d0e5af0fc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30779fa178428679a956a66ac9e5f5729db30b3aa0f0c5930f224cf140c5703f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77096ca9ca40766ea3f6ef7e915698ff33f824ba2bcfc0bb2c5b9b2eb64f21f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:24Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.867078 4786 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:24Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.879091 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:24Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.889861 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.889900 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.889911 4786 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.889929 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.889940 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:24Z","lastTransitionTime":"2026-01-27T13:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.896957 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67692efe-6dd1-450c-9c0c-a16b104376b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd5d21270850bfee1e52c235f542505dc4b8129379b8ef932ed1d59c599abee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b35278bed2eea01653cf49906c6a1f3447c33d1b80963a3dd10e07a854ec57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9284ead95a7669e31809ac214e2297884715a760a2eb4984e404c059ff2b6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8aec363d5b95eaca7107351edfd1ae381e714391a4db22a72a30d0250534254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9a9fd35c4ee52eeffce0566113373c6d70b3ccadc124e7d6a4b8c7ba1d21a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dbfe
6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:24Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.911349 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8f966fc9486cb42a28164dd56ec608330a6697d5e02318f78f9dfce3d79aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225905e87cce3c64d456d8b5e1bb4a58800fc55d59fcb9e98ef7523f1c8228f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:24Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.926359 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9q6dk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a290f38c-b94c-4233-9d98-9a54a728cedb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8999aa32071dc4f05aef19c7e34333b8788246bb340368b22d15120c38465d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5xht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9q6dk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:24Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.939081 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bh896" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22e89788-1c8e-46a7-97b4-3981352bb420\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3025c216a87acd5041f5dea80529063ca66929e021eadd55f17145fe1ae0a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dl86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bh896\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:24Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.993478 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.993528 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.993544 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.993563 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:24 crc kubenswrapper[4786]: I0127 13:07:24.993574 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:24Z","lastTransitionTime":"2026-01-27T13:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.096771 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.096832 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.096844 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.096870 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.096882 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:25Z","lastTransitionTime":"2026-01-27T13:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.200003 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.200046 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.200058 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.200077 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.200090 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:25Z","lastTransitionTime":"2026-01-27T13:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.302036 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.302063 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.302072 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.302086 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.302096 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:25Z","lastTransitionTime":"2026-01-27T13:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.404180 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.404211 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.404221 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.404238 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.404249 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:25Z","lastTransitionTime":"2026-01-27T13:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.432732 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 22:44:36.054744946 +0000 UTC Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.507113 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.507172 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.507194 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.507217 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.507230 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:25Z","lastTransitionTime":"2026-01-27T13:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.610233 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.610276 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.610290 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.610308 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.610331 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:25Z","lastTransitionTime":"2026-01-27T13:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.684211 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" event={"ID":"d153375a-777b-4331-992a-81c845c6d6eb","Type":"ContainerStarted","Data":"3f66f1182d0247b3a8413ae9ea30f8a3308dbe1a112c01473bae806c906f7383"} Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.689726 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" event={"ID":"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd","Type":"ContainerStarted","Data":"244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284"} Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.701527 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bsqnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd62799-0a3c-4682-acb6-a2bc9121b391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27f958678979f9bc086a378e90e5c6c8fc466d1c52e58bcc042ff72fafb7a4d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bsqnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.714470 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.714520 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.714532 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.714553 4786 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.714566 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:25Z","lastTransitionTime":"2026-01-27T13:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.721408 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8573a07-5c6b-490a-abd2-e38fe66ef4f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00de8064231a695efe43b465ac740c45561bd6e1998201059115249a9bd8118\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"light.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:07:16.073073 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:07:16.083509 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:07:16.083537 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083542 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083546 
1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:07:16.083549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:07:16.083552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:07:16.083555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:07:16.083684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 13:07:16.094546 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-572751212/tls.crt::/tmp/serving-cert-572751212/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769519220\\\\\\\\\\\\\\\" (2026-01-27 13:07:00 +0000 UTC to 2026-02-26 13:07:01 +0000 UTC (now=2026-01-27 13:07:16.094502626 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094907 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769519221\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769519221\\\\\\\\\\\\\\\" (2026-01-27 12:07:01 +0000 UTC to 2027-01-27 12:07:01 +0000 UTC (now=2026-01-27 13:07:16.094841616 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094999 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nF0127 13:07:16.095038 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.738534 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ee4772-2b03-404a-9de5-d98dcb431f0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6f25469fce6943fbdd305039e3ed4bbf9694eb3e485d9a7556a708a1434f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181c870b27579fdf828f52cee2ce312219d03f4d4c298f83cf4f74d0e5af0fc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30779fa178428679a956a66ac9e5f5729db30b3aa0f0c5930f224cf140c5703f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77096ca9ca40766ea3f6ef7e915698ff33f824ba2bcfc0bb2c5b9b2eb64f21f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.754133 4786 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.768161 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.781236 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.809167 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67692efe-6dd1-450c-9c0c-a16b104376b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd5d21270850bfee1e52c235f542505dc4b8129379b8ef932ed1d59c599abee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b35278bed2eea01653cf49906c6a1f3447c33d1b80963a3dd10e07a854ec57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9284ead95a7669e31809ac214e2297884715a760a2eb4984e404c059ff2b6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8aec363d5b95eaca7107351edfd1ae381e714391a4db22a72a30d0250534254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9a9fd35c4ee52eeffce0566113373c6d70b3ccadc124e7d6a4b8c7ba1d21a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.817886 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.817955 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.817975 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.818001 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.818021 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:25Z","lastTransitionTime":"2026-01-27T13:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.822837 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8f966fc9486cb42a28164dd56ec608330a6697d5e02318f78f9dfce3d79aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225905e87cce3c64d456d8b5e1bb4a58800fc55d59fcb9e98ef7523f1c8228f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.836551 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9q6dk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a290f38c-b94c-4233-9d98-9a54a728cedb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8999aa32071dc4f05aef19c7e34333b8788246bb340368b22d15120c38465d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5xht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9q6dk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.846861 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bh896" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22e89788-1c8e-46a7-97b4-3981352bb420\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3025c216a87acd5041f5dea80529063ca66929e021eadd55f17145fe1ae0a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dl86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bh896\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.866423 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6d56q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.882114 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a9a6f6ed65f430774bf05c88a99889462a10a5cdcc0b70fcbaccd4cfec9b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.894951 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2ba35bdc2ffd3bc68ad3a19dbcd16b1b04e4e476958651146cf3ebbacf81570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T13:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.914437 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d153375a-777b-4331-992a-81c845c6d6eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f66f1182d0247b3a8413ae9ea30f8a3308dbe1a112c01473bae806c906f7383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-prn84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.920050 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.920083 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.920101 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.920117 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.920127 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:25Z","lastTransitionTime":"2026-01-27T13:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:25 crc kubenswrapper[4786]: I0127 13:07:25.930516 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c6a2646-52f7-41be-8a81-3fed6eac75cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93f091073d8f9193f660e52b257ca0aba0a3b4efff7d6d8f1ee5ed0e4166162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c39d828ab97f5bcb6d5e29df28ec844ede201872825065370a69b8ec3b035e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7bxtk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:25Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.022763 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.022795 4786 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.022805 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.022824 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.022838 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:26Z","lastTransitionTime":"2026-01-27T13:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.125456 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.125515 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.125524 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.125542 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.125554 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:26Z","lastTransitionTime":"2026-01-27T13:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.227844 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.227888 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.227900 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.227915 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.227927 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:26Z","lastTransitionTime":"2026-01-27T13:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.330355 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.330397 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.330405 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.330417 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.330427 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:26Z","lastTransitionTime":"2026-01-27T13:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.432509 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.432554 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.432566 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.432584 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.432596 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:26Z","lastTransitionTime":"2026-01-27T13:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.433172 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 20:56:39.324521537 +0000 UTC Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.464531 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.464638 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:07:26 crc kubenswrapper[4786]: E0127 13:07:26.464738 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.464753 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:07:26 crc kubenswrapper[4786]: E0127 13:07:26.464835 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:07:26 crc kubenswrapper[4786]: E0127 13:07:26.464993 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.535277 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.535320 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.535331 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.535350 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.535361 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:26Z","lastTransitionTime":"2026-01-27T13:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.637754 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.637796 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.637805 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.637820 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.637830 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:26Z","lastTransitionTime":"2026-01-27T13:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.740832 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.740889 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.740902 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.740923 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.740937 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:26Z","lastTransitionTime":"2026-01-27T13:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.843269 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.843314 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.843323 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.843343 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.843354 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:26Z","lastTransitionTime":"2026-01-27T13:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.946954 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.947045 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.947058 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.947094 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:26 crc kubenswrapper[4786]: I0127 13:07:26.947114 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:26Z","lastTransitionTime":"2026-01-27T13:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.049323 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.049365 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.049380 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.049399 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.049411 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:27Z","lastTransitionTime":"2026-01-27T13:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.151678 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.151730 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.151740 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.151754 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.151764 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:27Z","lastTransitionTime":"2026-01-27T13:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.254932 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.254977 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.254987 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.255003 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.255014 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:27Z","lastTransitionTime":"2026-01-27T13:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.357694 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.357742 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.357752 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.357765 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.357774 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:27Z","lastTransitionTime":"2026-01-27T13:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.433351 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 18:29:57.528216384 +0000 UTC Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.459576 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.459627 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.459637 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.459650 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.459660 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:27Z","lastTransitionTime":"2026-01-27T13:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.476410 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.486467 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bsqnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd62799-0a3c-4682-acb6-a2bc9121b391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27f958678979f9bc086a378e90e5c6c8fc466d1c52e58bcc042ff72fafb7a4d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bsqnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.500381 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8573a07-5c6b-490a-abd2-e38fe66ef4f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00de8064231a695efe43b465ac740c45561bd6e1998201059115249a9bd8118\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"light.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:07:16.073073 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:07:16.083509 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:07:16.083537 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083542 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083546 
1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:07:16.083549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:07:16.083552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:07:16.083555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:07:16.083684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 13:07:16.094546 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-572751212/tls.crt::/tmp/serving-cert-572751212/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769519220\\\\\\\\\\\\\\\" (2026-01-27 13:07:00 +0000 UTC to 2026-02-26 13:07:01 +0000 UTC (now=2026-01-27 13:07:16.094502626 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094907 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769519221\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769519221\\\\\\\\\\\\\\\" (2026-01-27 12:07:01 +0000 UTC to 2027-01-27 12:07:01 +0000 UTC (now=2026-01-27 13:07:16.094841616 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094999 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nF0127 13:07:16.095038 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.514397 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ee4772-2b03-404a-9de5-d98dcb431f0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6f25469fce6943fbdd305039e3ed4bbf9694eb3e485d9a7556a708a1434f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181c870b27579fdf828f52cee2ce312219d03f4d4c298f83cf4f74d0e5af0fc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30779fa178428679a956a66ac9e5f5729db30b3aa0f0c5930f224cf140c5703f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77096ca9ca40766ea3f6ef7e915698ff33f824ba2bcfc0bb2c5b9b2eb64f21f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.528965 4786 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.543437 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.562024 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.562066 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.562078 4786 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.562107 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.562121 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:27Z","lastTransitionTime":"2026-01-27T13:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.574500 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67692efe-6dd1-450c-9c0c-a16b104376b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd5d21270850bfee1e52c235f542505dc4b8129379b8ef932ed1d59c599abee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b35278bed2eea01653cf49906c6a1f3447c33d1b80963a3dd10e07a854ec57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9284ead95a7669e31809ac214e2297884715a760a2eb4984e404c059ff2b6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8aec363d5b95eaca7107351edfd1ae381e714391a4db22a72a30d0250534254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9a9fd35c4ee52eeffce0566113373c6d70b3ccadc124e7d6a4b8c7ba1d21a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dbfe
6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.589864 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8f966fc9486cb42a28164dd56ec608330a6697d5e02318f78f9dfce3d79aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225905e87cce3c64d456d8b5e1bb4a58800fc55d59fcb9e98ef7523f1c8228f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.605940 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9q6dk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a290f38c-b94c-4233-9d98-9a54a728cedb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8999aa32071dc4f05aef19c7e34333b8788246bb340368b22d15120c38465d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5xht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9q6dk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.618023 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bh896" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22e89788-1c8e-46a7-97b4-3981352bb420\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3025c216a87acd5041f5dea80529063ca66929e021eadd55f17145fe1ae0a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dl86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bh896\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.649415 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6d56q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.666887 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c6a2646-52f7-41be-8a81-3fed6eac75cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93f091073d8f9193f660e52b257ca0aba0a3b4efff7d6d8f1ee5ed0e4166162\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c39d828ab97f5bcb6d5e29df28ec844ede201872825065370a69b8ec3b035e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-7bxtk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.667104 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.667149 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.667173 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.667190 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.667202 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:27Z","lastTransitionTime":"2026-01-27T13:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.682790 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a9a6f6ed65f430774bf05c88a99889462a10a5cdcc0b70fcbaccd4cfec9b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.699088 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2ba35bdc2ffd3bc68ad3a19dbcd16b1b04e4e476958651146cf3ebbacf81570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.704524 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" event={"ID":"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd","Type":"ContainerStarted","Data":"753c87e7290b08993b5eca9fe5237b5c5a38455f806b74fe1751737dfd70dc8c"} Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.705007 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.705077 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.717485 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d153375a-777b-4331-992a-81c845c6d6eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f66f1182d0247b3a8413ae9ea30f8a3308dbe1a112c01473bae806c906f7383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a786
dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-prn84\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.734992 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.736914 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.739242 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8573a07-5c6b-490a-abd2-e38fe66ef4f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00de8064231a695efe43b465ac740c45561bd6e1998201059115249a9bd8118\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"light.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:07:16.073073 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:07:16.083509 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:07:16.083537 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083542 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083546 
1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:07:16.083549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:07:16.083552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:07:16.083555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:07:16.083684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 13:07:16.094546 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-572751212/tls.crt::/tmp/serving-cert-572751212/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769519220\\\\\\\\\\\\\\\" (2026-01-27 13:07:00 +0000 UTC to 2026-02-26 13:07:01 +0000 UTC (now=2026-01-27 13:07:16.094502626 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094907 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769519221\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769519221\\\\\\\\\\\\\\\" (2026-01-27 12:07:01 +0000 UTC to 2027-01-27 12:07:01 +0000 UTC (now=2026-01-27 13:07:16.094841616 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094999 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nF0127 13:07:16.095038 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.752948 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ee4772-2b03-404a-9de5-d98dcb431f0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6f25469fce6943fbdd305039e3ed4bbf9694eb3e485d9a7556a708a1434f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181c870b27579fdf828f52cee2ce312219d03f4d4c298f83cf4f74d0e5af0fc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30779fa178428679a956a66ac9e5f5729db30b3aa0f0c5930f224cf140c5703f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77096ca9ca40766ea3f6ef7e915698ff33f824ba2bcfc0bb2c5b9b2eb64f21f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.767030 4786 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.770375 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.770402 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.770410 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.770427 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.770438 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:27Z","lastTransitionTime":"2026-01-27T13:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.781229 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.794980 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.806869 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bsqnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd62799-0a3c-4682-acb6-a2bc9121b391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27f958678979f9bc086a378e90e5c6c8fc466d1c52e58bcc042ff72fafb7a4d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bsqnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.827207 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67692efe-6dd1-450c-9c0c-a16b104376b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd5d21270850bfee1e52c235f542505dc4b8129379b8ef932ed1d59c599abee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b35278bed2eea01653cf49906c6a1f3447c33d1b80963a3dd10e07a854ec57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9284ead95a7669e31809ac214e2297884715a760a2eb4984e404c059ff2b6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8aec363d5b95eaca7107351edfd1ae381e714391a4db22a72a30d0250534254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9a9fd35c4ee52eeffce0566113373c6d70b3ccadc124e7d6a4b8c7ba1d21a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.841776 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8f966fc9486cb42a28164dd56ec608330a6697d5e02318f78f9dfce3d79aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225905e87cce3c64d456d8b5e1bb4a58800fc55d59fcb9e98ef7523f1c8228f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.856538 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9q6dk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a290f38c-b94c-4233-9d98-9a54a728cedb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8999aa32071dc4f05aef19c7e34333b8788246bb340368b22d15120c38465d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5xht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9q6dk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.868931 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bh896" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22e89788-1c8e-46a7-97b4-3981352bb420\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3025c216a87acd5041f5dea80529063ca66929e021eadd55f17145fe1ae0a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dl86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bh896\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.872677 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.872711 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.872721 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.872735 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.872745 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:27Z","lastTransitionTime":"2026-01-27T13:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.890537 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://753c87e7290b08993b5eca9fe5237b5c5a38455f806b74fe1751737dfd70dc8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6d56q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.902771 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a9a6f6ed65f430774bf05c88a99889462a10a5cdcc0b70fcbaccd4cfec9b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-
27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.914160 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2ba35bdc2ffd3bc68ad3a19dbcd16b1b04e4e476958651146cf3ebbacf81570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T13:07:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.930344 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d153375a-777b-4331-992a-81c845c6d6eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f66f1182d0247b3a8413ae9ea30f8a3308dbe1a112c01473bae806c906f7383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-prn84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.941742 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c6a2646-52f7-41be-8a81-3fed6eac75cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93f091073d8f9193f660e52b257ca0aba0a3b4efff7d6d8f1ee5ed0e4166162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d0
6c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c39d828ab97f5bcb6d5e29df28ec844ede201872825065370a69b8ec3b035e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7bxtk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.959578 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67692efe-6dd1-450c-9c0c-a16b104376b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd5d21270850bfee1e52c235f542505dc4b8129379b8ef932ed1d59c599abee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b35278bed2eea01653cf49906c6a1f3447c33d1b80963a3dd10e07a854ec57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9284ead95a7669e31809ac214e2297884715a760a2eb4984e404c059ff2b6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8aec363d5b95eaca7107351edfd1ae38
1e714391a4db22a72a30d0250534254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9a9fd35c4ee52eeffce0566113373c6d70b3ccadc124e7d6a4b8c7ba1d21a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.975055 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8f966fc9486cb42a28164dd56ec608330a6697d5e02318f78f9dfce3d79aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225905e87cce3c64d456d8b5e1bb4a58800fc55d59fcb9e98ef7523f1c8228f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.976410 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.976458 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.976494 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.976515 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.976528 4786 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:27Z","lastTransitionTime":"2026-01-27T13:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:27 crc kubenswrapper[4786]: I0127 13:07:27.989436 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9q6dk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a290f38c-b94c-4233-9d98-9a54a728cedb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8999aa32071dc4f05aef19c7e34333b8788246bb340368b22d15120c38465d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5xht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:
07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9q6dk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.001709 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bh896" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22e89788-1c8e-46a7-97b4-3981352bb420\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3025c216a87acd5041f5dea80529063ca66929e021eadd55f17145fe1ae0a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dl86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bh896\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:27Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.031850 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://753c87e7290b08993b5eca9fe5237b5c5a38455f806b74fe1751737dfd70dc8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6d56q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:28Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.046958 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a9a6f6ed65f430774bf05c88a99889462a10a5cdcc0b70fcbaccd4cfec9b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-2
7T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:28Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.060531 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2ba35bdc2ffd3bc68ad3a19dbcd16b1b04e4e476958651146cf3ebbacf81570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T13:07:28Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.076765 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d153375a-777b-4331-992a-81c845c6d6eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f66f1182d0247b3a8413ae9ea30f8a3308dbe1a112c01473bae806c906f7383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-prn84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:28Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.079881 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.079963 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.079985 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.080010 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.080029 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:28Z","lastTransitionTime":"2026-01-27T13:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.092595 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c6a2646-52f7-41be-8a81-3fed6eac75cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93f091073d8f9193f660e52b257ca0aba0a3b4efff7d6d8f1ee5ed0e4166162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c39d828ab97f5bcb6d5e29df28ec844ede201872825065370a69b8ec3b035e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7bxtk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:28Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.106296 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8573a07-5c6b-490a-abd2-e38fe66ef4f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00de8064231a695efe43b465ac740c45561bd6e1998201059115249a9bd8118\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"light.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:07:16.073073 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:07:16.083509 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:07:16.083537 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083542 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083546 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:07:16.083549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:07:16.083552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:07:16.083555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:07:16.083684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 13:07:16.094546 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-572751212/tls.crt::/tmp/serving-cert-572751212/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769519220\\\\\\\\\\\\\\\" (2026-01-27 13:07:00 +0000 UTC to 2026-02-26 13:07:01 +0000 UTC (now=2026-01-27 13:07:16.094502626 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094907 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 
certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769519221\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769519221\\\\\\\\\\\\\\\" (2026-01-27 12:07:01 +0000 UTC to 2027-01-27 12:07:01 +0000 UTC (now=2026-01-27 13:07:16.094841616 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094999 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nF0127 13:07:16.095038 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771ae
e1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:28Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.118661 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ee4772-2b03-404a-9de5-d98dcb431f0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6f25469fce6943fbdd305039e3ed4bbf9694eb3e485d9a7556a708a1434f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181c870b27579fdf828f52cee2ce312219d03f4d4c298f83cf4f74d0e5af0fc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30779fa178428679a956a66ac9e5f5729db30b3aa0f0c5930f224cf140c5703f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77096ca9ca40766ea3f6ef7e915698ff33f824ba2bcfc0bb2c5b9b2eb64f21f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:28Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.133080 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:28Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.147125 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:28Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.165659 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:28Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.178107 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bsqnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd62799-0a3c-4682-acb6-a2bc9121b391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27f958678979f9bc086a378e90e5c6c8fc466d1c52e58bcc042ff72fafb7a4d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bsqnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:28Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.182956 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.183112 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.183173 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.183236 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.183302 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:28Z","lastTransitionTime":"2026-01-27T13:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.286399 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.286695 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.286932 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.287032 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.287115 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:28Z","lastTransitionTime":"2026-01-27T13:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.390007 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.390055 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.390074 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.390098 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.390118 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:28Z","lastTransitionTime":"2026-01-27T13:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.434106 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 16:31:15.84583814 +0000 UTC Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.464542 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:07:28 crc kubenswrapper[4786]: E0127 13:07:28.464760 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.464850 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.465109 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:07:28 crc kubenswrapper[4786]: E0127 13:07:28.465248 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:07:28 crc kubenswrapper[4786]: E0127 13:07:28.465351 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.492667 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.492944 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.493062 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.493153 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.493227 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:28Z","lastTransitionTime":"2026-01-27T13:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.600196 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.600297 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.600317 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.600352 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.600371 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:28Z","lastTransitionTime":"2026-01-27T13:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.703395 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.703426 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.703434 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.703448 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.703460 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:28Z","lastTransitionTime":"2026-01-27T13:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.707469 4786 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.806649 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.806702 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.806715 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.806740 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.806755 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:28Z","lastTransitionTime":"2026-01-27T13:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.909729 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.909970 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.910031 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.910092 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:28 crc kubenswrapper[4786]: I0127 13:07:28.910146 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:28Z","lastTransitionTime":"2026-01-27T13:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.012278 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.012575 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.012676 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.012759 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.012844 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:29Z","lastTransitionTime":"2026-01-27T13:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.115546 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.115931 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.115943 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.115957 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.115966 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:29Z","lastTransitionTime":"2026-01-27T13:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.218145 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.218194 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.218207 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.218223 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.218235 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:29Z","lastTransitionTime":"2026-01-27T13:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.320132 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.320168 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.320176 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.320189 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.320200 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:29Z","lastTransitionTime":"2026-01-27T13:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.421804 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.421869 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.421880 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.421899 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.421911 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:29Z","lastTransitionTime":"2026-01-27T13:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.434511 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 01:36:49.101245337 +0000 UTC Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.525329 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.525368 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.525380 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.525401 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.525414 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:29Z","lastTransitionTime":"2026-01-27T13:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.628005 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.628044 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.628055 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.628072 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.628084 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:29Z","lastTransitionTime":"2026-01-27T13:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.710499 4786 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.730429 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.730473 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.730483 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.730498 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.730509 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:29Z","lastTransitionTime":"2026-01-27T13:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.832456 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.832503 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.832513 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.832531 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.832546 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:29Z","lastTransitionTime":"2026-01-27T13:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.935168 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.935217 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.935229 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.935247 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:29 crc kubenswrapper[4786]: I0127 13:07:29.935261 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:29Z","lastTransitionTime":"2026-01-27T13:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.037228 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.037260 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.037269 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.037284 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.037298 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:30Z","lastTransitionTime":"2026-01-27T13:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.139591 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.139660 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.139672 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.139689 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.139709 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:30Z","lastTransitionTime":"2026-01-27T13:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.242264 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.242310 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.242321 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.242354 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.242367 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:30Z","lastTransitionTime":"2026-01-27T13:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.344939 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.344982 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.345013 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.345030 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.345042 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:30Z","lastTransitionTime":"2026-01-27T13:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.434679 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 12:51:00.053918178 +0000 UTC Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.447848 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.447894 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.447903 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.447922 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.447932 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:30Z","lastTransitionTime":"2026-01-27T13:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.464155 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.464178 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.464231 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:07:30 crc kubenswrapper[4786]: E0127 13:07:30.464337 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:07:30 crc kubenswrapper[4786]: E0127 13:07:30.464439 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:07:30 crc kubenswrapper[4786]: E0127 13:07:30.464511 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.550509 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.550557 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.550570 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.550588 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.550619 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:30Z","lastTransitionTime":"2026-01-27T13:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.653164 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.653236 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.653248 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.653265 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.653278 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:30Z","lastTransitionTime":"2026-01-27T13:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.755101 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.755139 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.755197 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.755218 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.755230 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:30Z","lastTransitionTime":"2026-01-27T13:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.858465 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.858514 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.858524 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.858540 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.858549 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:30Z","lastTransitionTime":"2026-01-27T13:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.961692 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.961771 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.961783 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.961803 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:30 crc kubenswrapper[4786]: I0127 13:07:30.961817 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:30Z","lastTransitionTime":"2026-01-27T13:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.064554 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.064596 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.064651 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.064667 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.064676 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:31Z","lastTransitionTime":"2026-01-27T13:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.167706 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.167744 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.167753 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.167766 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.167774 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:31Z","lastTransitionTime":"2026-01-27T13:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.269753 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.269801 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.269816 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.269833 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.269844 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:31Z","lastTransitionTime":"2026-01-27T13:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.372594 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.372648 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.372657 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.372673 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.372685 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:31Z","lastTransitionTime":"2026-01-27T13:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.435696 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 16:57:21.312866412 +0000 UTC Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.474709 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.474753 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.474765 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.474780 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.474792 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:31Z","lastTransitionTime":"2026-01-27T13:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.475439 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rgrth"] Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.476298 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rgrth" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.478170 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.484022 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.490899 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8573a07-5c6b-490a-abd2-e38fe66ef4f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00de8064231a695efe43b465ac740c45561bd6e1998201059115249a9bd8118\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"light.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:07:16.073073 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:07:16.083509 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:07:16.083537 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083542 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083546 
1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:07:16.083549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:07:16.083552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:07:16.083555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:07:16.083684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 13:07:16.094546 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-572751212/tls.crt::/tmp/serving-cert-572751212/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769519220\\\\\\\\\\\\\\\" (2026-01-27 13:07:00 +0000 UTC to 2026-02-26 13:07:01 +0000 UTC (now=2026-01-27 13:07:16.094502626 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094907 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769519221\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769519221\\\\\\\\\\\\\\\" (2026-01-27 12:07:01 +0000 UTC to 2027-01-27 12:07:01 +0000 UTC (now=2026-01-27 13:07:16.094841616 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094999 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nF0127 13:07:16.095038 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.502003 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ee4772-2b03-404a-9de5-d98dcb431f0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6f25469fce6943fbdd305039e3ed4bbf9694eb3e485d9a7556a708a1434f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181c870b27579fdf828f52cee2ce312219d03f4d4c298f83cf4f74d0e5af0fc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30779fa178428679a956a66ac9e5f5729db30b3aa0f0c5930f224cf140c5703f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77096ca9ca40766ea3f6ef7e915698ff33f824ba2bcfc0bb2c5b9b2eb64f21f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.513539 4786 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.524201 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.537941 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.549841 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bsqnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd62799-0a3c-4682-acb6-a2bc9121b391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27f958678979f9bc086a378e90e5c6c8fc466d1c52e58bcc042ff72fafb7a4d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bsqnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.573194 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67692efe-6dd1-450c-9c0c-a16b104376b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd5d21270850bfee1e52c235f542505dc4b8129379b8ef932ed1d59c599abee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b35278bed2eea01653cf49906c6a1f3447c33d1b80963a3dd10e07a854ec57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9284ead95a7669e31809ac214e2297884715a760a2eb4984e404c059ff2b6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8aec363d5b95eaca7107351edfd1ae381e714391a4db22a72a30d0250534254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9a9fd35c4ee52eeffce0566113373c6d70b3ccadc124e7d6a4b8c7ba1d21a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.577270 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.577334 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.577348 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.577389 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 
13:07:31.577403 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:31Z","lastTransitionTime":"2026-01-27T13:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.586521 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8f966fc9486cb42a28164dd56ec608330a6697d5e02318f78f9dfce3d79aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"nam
e\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225905e87cce3c64d456d8b5e1bb4a58800fc55d59fcb9e98ef7523f1c8228f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.599317 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9q6dk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a290f38c-b94c-4233-9d98-9a54a728cedb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8999aa32071dc4f05aef19c7e34333b8788246bb340368b22d15120c38465d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5xht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9q6dk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.612829 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bh896" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22e89788-1c8e-46a7-97b4-3981352bb420\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3025c216a87acd5041f5dea80529063ca66929e021eadd55f17145fe1ae0a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dl86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bh896\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.627917 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbce2942-82d4-4c43-b41d-20c66d2b0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-722rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-722rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rgrth\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.639440 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fbce2942-82d4-4c43-b41d-20c66d2b0be0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rgrth\" (UID: \"fbce2942-82d4-4c43-b41d-20c66d2b0be0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rgrth" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.639525 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-722rx\" (UniqueName: \"kubernetes.io/projected/fbce2942-82d4-4c43-b41d-20c66d2b0be0-kube-api-access-722rx\") pod \"ovnkube-control-plane-749d76644c-rgrth\" (UID: \"fbce2942-82d4-4c43-b41d-20c66d2b0be0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rgrth" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.639752 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fbce2942-82d4-4c43-b41d-20c66d2b0be0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rgrth\" (UID: \"fbce2942-82d4-4c43-b41d-20c66d2b0be0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rgrth" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.639879 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fbce2942-82d4-4c43-b41d-20c66d2b0be0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-rgrth\" (UID: 
\"fbce2942-82d4-4c43-b41d-20c66d2b0be0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rgrth" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.661495 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://753c87e7290b08993b5eca9fe5237b5c5a38455f806b74fe1751737dfd70dc8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6d56q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.679756 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.679821 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.679835 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.679852 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.679864 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:31Z","lastTransitionTime":"2026-01-27T13:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.682653 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a9a6f6ed65f430774bf05c88a99889462a10a5cdcc0b70fcbaccd4cfec9b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.701178 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2ba35bdc2ffd3bc68ad3a19dbcd16b1b04e4e476958651146cf3ebbacf81570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.716434 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6d56q_ad21a31d-efbf-4c10-b3d1-0f6cf71793bd/ovnkube-controller/0.log" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.719115 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d153375a-777b-4331-992a-81c845c6d6eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f66f1182d0247b3a8413ae9ea30f8a3308dbe1a112c01473bae806c906f7383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a786
dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-prn84\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.719651 4786 generic.go:334] "Generic (PLEG): container finished" podID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerID="753c87e7290b08993b5eca9fe5237b5c5a38455f806b74fe1751737dfd70dc8c" exitCode=1 Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.719696 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" event={"ID":"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd","Type":"ContainerDied","Data":"753c87e7290b08993b5eca9fe5237b5c5a38455f806b74fe1751737dfd70dc8c"} Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.720453 4786 scope.go:117] "RemoveContainer" containerID="753c87e7290b08993b5eca9fe5237b5c5a38455f806b74fe1751737dfd70dc8c" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.728645 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.735500 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c6a2646-52f7-41be-8a81-3fed6eac75cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93f091073d8f9193f660e52b257ca0aba0a3b4efff7d6d8f1ee5ed0e4166162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c39d828ab97f5bcb6d5e29df28ec844ede20187
2825065370a69b8ec3b035e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7bxtk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.740427 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fbce2942-82d4-4c43-b41d-20c66d2b0be0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-rgrth\" (UID: \"fbce2942-82d4-4c43-b41d-20c66d2b0be0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rgrth" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.740471 4786 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fbce2942-82d4-4c43-b41d-20c66d2b0be0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rgrth\" (UID: \"fbce2942-82d4-4c43-b41d-20c66d2b0be0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rgrth" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.740503 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-722rx\" (UniqueName: \"kubernetes.io/projected/fbce2942-82d4-4c43-b41d-20c66d2b0be0-kube-api-access-722rx\") pod \"ovnkube-control-plane-749d76644c-rgrth\" (UID: \"fbce2942-82d4-4c43-b41d-20c66d2b0be0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rgrth" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.740540 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fbce2942-82d4-4c43-b41d-20c66d2b0be0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rgrth\" (UID: \"fbce2942-82d4-4c43-b41d-20c66d2b0be0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rgrth" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.741245 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fbce2942-82d4-4c43-b41d-20c66d2b0be0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-rgrth\" (UID: \"fbce2942-82d4-4c43-b41d-20c66d2b0be0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rgrth" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.741262 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fbce2942-82d4-4c43-b41d-20c66d2b0be0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rgrth\" (UID: \"fbce2942-82d4-4c43-b41d-20c66d2b0be0\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rgrth" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.749927 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbce2942-82d4-4c43-b41d-20c66d2b0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-722rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-722rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rgrth\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.750571 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fbce2942-82d4-4c43-b41d-20c66d2b0be0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rgrth\" (UID: \"fbce2942-82d4-4c43-b41d-20c66d2b0be0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rgrth" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.758890 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-722rx\" (UniqueName: \"kubernetes.io/projected/fbce2942-82d4-4c43-b41d-20c66d2b0be0-kube-api-access-722rx\") pod \"ovnkube-control-plane-749d76644c-rgrth\" (UID: \"fbce2942-82d4-4c43-b41d-20c66d2b0be0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rgrth" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.770049 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67692efe-6dd1-450c-9c0c-a16b104376b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd5d21270850bfee1e52c235f542505dc4b8129379b8ef932ed1d59c599abee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b35278bed2eea01653cf49906c6a1f3447c33d1b80963a3dd10e07a854ec57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9284ead95a7669e31809ac214e2297884715a760a2eb4984e404c059ff2b6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8aec363d5b95eaca7107351edfd1ae381e714391a4db22a72a30d0250534254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9a9fd35c4ee52eeffce0566113373c6d70b3ccadc124e7d6a4b8c7ba1d21a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.782362 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.782396 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.782404 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.782420 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.782430 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:31Z","lastTransitionTime":"2026-01-27T13:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.783448 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8f966fc9486cb42a28164dd56ec608330a6697d5e02318f78f9dfce3d79aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225905e87cce3c64d456d8b5e1bb4a58800fc55d59fcb9e98ef7523f1c8228f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.793595 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rgrth" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.796676 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9q6dk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a290f38c-b94c-4233-9d98-9a54a728cedb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8999aa32071dc4f05aef19c7e34333b8788246bb340368b22d15120c38465d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5xht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9q6dk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.806312 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bh896" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22e89788-1c8e-46a7-97b4-3981352bb420\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3025c216a87acd5041f5dea80529063ca66929e021eadd55f17145fe1ae0a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dl86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bh896\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.824761 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://753c87e7290b08993b5eca9fe5237b5c5a38455f806b74fe1751737dfd70dc8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://753c87e7290b08993b5eca9fe5237b5c5a38455f806b74fe1751737dfd70dc8c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13
:07:31Z\\\",\\\"message\\\":\\\"1.069744 6101 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 13:07:31.070253 6101 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 13:07:31.071695 6101 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 13:07:31.071714 6101 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 13:07:31.071761 6101 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 13:07:31.071775 6101 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 13:07:31.071798 6101 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 13:07:31.071812 6101 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 13:07:31.071817 6101 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 13:07:31.071868 6101 factory.go:656] Stopping watch factory\\\\nI0127 13:07:31.071884 6101 ovnkube.go:599] Stopped ovnkube\\\\nI0127 13:07:31.071918 6101 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 13:07:31.071948 6101 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 13:07:31.071976 6101 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 13:07:31.072025 6101 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 13:07:31.072055 6101 handler.go:208] Removed *v1.EgressIP event 
ha\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6d56q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.837472 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c6a2646-52f7-41be-8a81-3fed6eac75cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93f091073d8f9193f660e52b257ca0aba0a3b4efff7d6d8f1ee5ed0e4166162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c39d828ab97f5bcb6d5e29df28ec844ede20187
2825065370a69b8ec3b035e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7bxtk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.851258 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a9a6f6ed65f430774bf05c88a99889462a10a5cdcc0b70fcbaccd4cfec9b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.862920 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2ba35bdc2ffd3bc68ad3a19dbcd16b1b04e4e476958651146cf3ebbacf81570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.876846 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d153375a-777b-4331-992a-81c845c6d6eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f66f1182d0247b3a8413ae9ea30f8a3308dbe1a112c01473bae806c906f7383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:23Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-prn84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.886487 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.886535 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.886581 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.886620 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.886651 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:31Z","lastTransitionTime":"2026-01-27T13:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.889278 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.901621 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bsqnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd62799-0a3c-4682-acb6-a2bc9121b391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27f958678979f9bc086a378e90e5c6c8fc466d1c52e58bcc042ff72fafb7a4d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bsqnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.917183 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8573a07-5c6b-490a-abd2-e38fe66ef4f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00de8064231a695efe43b465ac740c45561bd6e1998201059115249a9bd8118\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"light.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:07:16.073073 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:07:16.083509 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:07:16.083537 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083542 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083546 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:07:16.083549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:07:16.083552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:07:16.083555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:07:16.083684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 13:07:16.094546 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" 
certName=\\\\\\\"serving-cert::/tmp/serving-cert-572751212/tls.crt::/tmp/serving-cert-572751212/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769519220\\\\\\\\\\\\\\\" (2026-01-27 13:07:00 +0000 UTC to 2026-02-26 13:07:01 +0000 UTC (now=2026-01-27 13:07:16.094502626 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094907 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769519221\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769519221\\\\\\\\\\\\\\\" (2026-01-27 12:07:01 +0000 UTC to 2027-01-27 12:07:01 +0000 UTC (now=2026-01-27 13:07:16.094841616 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094999 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nF0127 13:07:16.095038 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.931075 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ee4772-2b03-404a-9de5-d98dcb431f0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6f25469fce6943fbdd305039e3ed4bbf9694eb3e485d9a7556a708a1434f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad
6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181c870b27579fdf828f52cee2ce312219d03f4d4c298f83cf4f74d0e5af0fc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30779fa178428679a956a66ac9e5f5729db30b3aa0f0c5930f224cf140c5703f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77096ca9ca40766ea3f6ef7e915698ff33f824ba2bcfc0bb2c5b9b2eb64f21f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.944807 4786 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.959704 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:31Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.989887 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.989933 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.989944 4786 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.989966 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:31 crc kubenswrapper[4786]: I0127 13:07:31.989979 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:31Z","lastTransitionTime":"2026-01-27T13:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.092288 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.092636 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.092647 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.092664 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.092675 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:32Z","lastTransitionTime":"2026-01-27T13:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.194872 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.194911 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.194919 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.194932 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.194942 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:32Z","lastTransitionTime":"2026-01-27T13:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.201798 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-8jf77"] Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.202258 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:07:32 crc kubenswrapper[4786]: E0127 13:07:32.202321 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.220169 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://753c87e7290b08993b5eca9fe5237b5c5a38455f806b74fe1751737dfd70dc8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://753c87e7290b08993b5eca9fe5237b5c5a38455f806b74fe1751737dfd70dc8c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"message\\\":\\\"1.069744 6101 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 13:07:31.070253 6101 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 13:07:31.071695 6101 handler.go:190] Sending *v1.Node event handler 2 for 
removal\\\\nI0127 13:07:31.071714 6101 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 13:07:31.071761 6101 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 13:07:31.071775 6101 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 13:07:31.071798 6101 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 13:07:31.071812 6101 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 13:07:31.071817 6101 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 13:07:31.071868 6101 factory.go:656] Stopping watch factory\\\\nI0127 13:07:31.071884 6101 ovnkube.go:599] Stopped ovnkube\\\\nI0127 13:07:31.071918 6101 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 13:07:31.071948 6101 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 13:07:31.071976 6101 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 13:07:31.072025 6101 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 13:07:31.072055 6101 handler.go:208] Removed *v1.EgressIP event 
ha\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6d56q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.235284 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a9a6f6ed65f430774bf05c88a99889462a10a5cdcc0b70fcbaccd4cfec9b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.246346 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:07:32 crc kubenswrapper[4786]: E0127 13:07:32.246475 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:07:48.246451291 +0000 UTC m=+51.457065410 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.246546 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.246629 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.246660 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:07:32 crc kubenswrapper[4786]: E0127 13:07:32.246752 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.246794 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:07:32 crc kubenswrapper[4786]: E0127 13:07:32.246790 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 13:07:32 crc kubenswrapper[4786]: E0127 13:07:32.246869 4786 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not 
registered] Jan 27 13:07:32 crc kubenswrapper[4786]: E0127 13:07:32.246902 4786 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 13:07:32 crc kubenswrapper[4786]: E0127 13:07:32.246943 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 13:07:48.246933384 +0000 UTC m=+51.457547503 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:07:32 crc kubenswrapper[4786]: E0127 13:07:32.247000 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 13:07:32 crc kubenswrapper[4786]: E0127 13:07:32.247016 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 13:07:32 crc kubenswrapper[4786]: E0127 13:07:32.247033 4786 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:07:32 crc kubenswrapper[4786]: E0127 13:07:32.246836 4786 configmap.go:193] Couldn't 
get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 13:07:32 crc kubenswrapper[4786]: E0127 13:07:32.247095 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 13:07:48.247036337 +0000 UTC m=+51.457650466 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 13:07:32 crc kubenswrapper[4786]: E0127 13:07:32.247143 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 13:07:48.247133369 +0000 UTC m=+51.457747588 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:07:32 crc kubenswrapper[4786]: E0127 13:07:32.247167 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-27 13:07:48.24715237 +0000 UTC m=+51.457766699 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.256482 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2ba35bdc2ffd3bc68ad3a19dbcd16b1b04e4e476958651146cf3ebbacf81570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables
-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.281087 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d153375a-777b-4331-992a-81c845c6d6eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f66f1182d0247b3a8413ae9ea30f8a3308dbe1a112c01473bae806c906f7383\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\
":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-prn84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.297472 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.297549 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.297574 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.297636 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.297662 4786 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:32Z","lastTransitionTime":"2026-01-27T13:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.298546 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c6a2646-52f7-41be-8a81-3fed6eac75cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93f091073d8f9193f660e52b257ca0aba0a3b4efff7d6d8f1ee5ed0e4166162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c39d828ab97f5bcb6d5e29df28ec844ede201872825065370a69b8ec3b035e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7bxtk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-01-27T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.315394 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8573a07-5c6b-490a-abd2-e38fe66ef4f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube
-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00de8064231a695efe43b465ac740c45561bd6e1998201059115249a9bd8118\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha25
6:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"light.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:07:16.073073 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:07:16.083509 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:07:16.083537 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083542 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083546 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:07:16.083549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:07:16.083552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:07:16.083555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:07:16.083684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 13:07:16.094546 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-572751212/tls.crt::/tmp/serving-cert-572751212/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769519220\\\\\\\\\\\\\\\" (2026-01-27 13:07:00 +0000 UTC to 2026-02-26 13:07:01 +0000 UTC (now=2026-01-27 13:07:16.094502626 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094907 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" 
index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769519221\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769519221\\\\\\\\\\\\\\\" (2026-01-27 12:07:01 +0000 UTC to 2027-01-27 12:07:01 +0000 UTC (now=2026-01-27 13:07:16.094841616 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094999 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nF0127 13:07:16.095038 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358
25771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.332211 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ee4772-2b03-404a-9de5-d98dcb431f0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6f25469fce6943fbdd305039e3ed4bbf9694eb3e485d9a7556a708a1434f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181c870b27579fdf828f52cee2ce312219d03f4d4c298f83cf4f74d0e5af0fc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30779fa178428679a956a66ac9e5f5729db30b3aa0f0c5930f224cf140c5703f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77096ca9ca40766ea3f6ef7e915698ff33f824ba2bcfc0bb2c5b9b2eb64f21f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.344490 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.347704 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496-metrics-certs\") pod \"network-metrics-daemon-8jf77\" (UID: \"a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496\") " pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.347790 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-fswjw\" (UniqueName: \"kubernetes.io/projected/a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496-kube-api-access-fswjw\") pod \"network-metrics-daemon-8jf77\" (UID: \"a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496\") " pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.361635 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.379759 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.390114 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bsqnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd62799-0a3c-4682-acb6-a2bc9121b391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27f958678979f9bc086a378e90e5c6c8fc466d1c52e58bcc042ff72fafb7a4d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bsqnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.400798 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.400951 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.401034 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.401125 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.401250 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:32Z","lastTransitionTime":"2026-01-27T13:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.404991 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8jf77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fswjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fswjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8jf77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:32 crc 
kubenswrapper[4786]: I0127 13:07:32.424624 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67692efe-6dd1-450c-9c0c-a16b104376b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd5d21270850bfee1e52c235f542505dc4b8129379b8ef932ed1d59c599abee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://5b35278bed2eea01653cf49906c6a1f3447c33d1b80963a3dd10e07a854ec57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9284ead95a7669e31809ac214e2297884715a760a2eb4984e404c059ff2b6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8aec363d5b95eaca7107351edfd1ae381e714391a4db22a72a30d0250534254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9a9fd35c4ee52eeffce0566113373c6d70b3ccadc124e7d6a4b8c7ba1d21a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.436638 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 21:02:49.263090566 +0000 UTC Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.437576 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8f966fc9486cb42a28164dd56ec608330a6697d5e02318f78f9dfce3d79aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225905e87cce3c64d456d8b5e1bb4a58800fc55d59fcb9e98ef7523f1c8228f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.449124 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fswjw\" (UniqueName: \"kubernetes.io/projected/a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496-kube-api-access-fswjw\") pod \"network-metrics-daemon-8jf77\" (UID: \"a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496\") " pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.449192 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496-metrics-certs\") pod \"network-metrics-daemon-8jf77\" (UID: \"a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496\") " pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:07:32 crc kubenswrapper[4786]: E0127 13:07:32.449390 4786 secret.go:188] Couldn't get secret 
openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 13:07:32 crc kubenswrapper[4786]: E0127 13:07:32.449493 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496-metrics-certs podName:a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496 nodeName:}" failed. No retries permitted until 2026-01-27 13:07:32.949471069 +0000 UTC m=+36.160085188 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496-metrics-certs") pod "network-metrics-daemon-8jf77" (UID: "a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.455911 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9q6dk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a290f38c-b94c-4233-9d98-9a54a728cedb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8999aa32071dc4f05aef19c7e34333b878824
6bb340368b22d15120c38465d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5xht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9q6dk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.464662 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.464695 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:07:32 crc kubenswrapper[4786]: E0127 13:07:32.464815 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.464860 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:07:32 crc kubenswrapper[4786]: E0127 13:07:32.464952 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:07:32 crc kubenswrapper[4786]: E0127 13:07:32.465017 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.466494 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bh896" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22e89788-1c8e-46a7-97b4-3981352bb420\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3025c216a87acd5041f5dea80529063ca66929e021eadd55f17145fe1ae0a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dl86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bh896\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.468471 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fswjw\" (UniqueName: \"kubernetes.io/projected/a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496-kube-api-access-fswjw\") pod \"network-metrics-daemon-8jf77\" (UID: \"a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496\") " pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.478281 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbce2942-82d4-4c43-b41d-20c66d2b0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-722rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-722rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.504075 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.504108 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.504117 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.504130 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.504137 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:32Z","lastTransitionTime":"2026-01-27T13:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.606670 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.606717 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.606728 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.606745 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.606757 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:32Z","lastTransitionTime":"2026-01-27T13:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.709897 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.709952 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.709966 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.709987 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.710000 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:32Z","lastTransitionTime":"2026-01-27T13:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.724848 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6d56q_ad21a31d-efbf-4c10-b3d1-0f6cf71793bd/ovnkube-controller/1.log" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.725468 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6d56q_ad21a31d-efbf-4c10-b3d1-0f6cf71793bd/ovnkube-controller/0.log" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.728864 4786 generic.go:334] "Generic (PLEG): container finished" podID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerID="2fbd39cd4c82438fefef67fef48b606b60ed2194292d1f6d839e0ff776c31f0a" exitCode=1 Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.728929 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" event={"ID":"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd","Type":"ContainerDied","Data":"2fbd39cd4c82438fefef67fef48b606b60ed2194292d1f6d839e0ff776c31f0a"} Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.729197 4786 scope.go:117] "RemoveContainer" containerID="753c87e7290b08993b5eca9fe5237b5c5a38455f806b74fe1751737dfd70dc8c" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.730035 4786 scope.go:117] "RemoveContainer" containerID="2fbd39cd4c82438fefef67fef48b606b60ed2194292d1f6d839e0ff776c31f0a" Jan 27 13:07:32 crc kubenswrapper[4786]: E0127 13:07:32.730231 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-6d56q_openshift-ovn-kubernetes(ad21a31d-efbf-4c10-b3d1-0f6cf71793bd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.731810 4786 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rgrth" event={"ID":"fbce2942-82d4-4c43-b41d-20c66d2b0be0","Type":"ContainerStarted","Data":"cd2b4623ac1917ef8bd9d422064f7090f88a1ae36158d508d314c46b6d3506a6"} Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.731860 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rgrth" event={"ID":"fbce2942-82d4-4c43-b41d-20c66d2b0be0","Type":"ContainerStarted","Data":"4ef31e49a0cd3198de7ca02721a5731f4ed6283e79697e4f2b0eb2bd00712eaa"} Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.731877 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rgrth" event={"ID":"fbce2942-82d4-4c43-b41d-20c66d2b0be0","Type":"ContainerStarted","Data":"fd19f2a2f059324cbbc7eaeba569df17dffad7906a9bfffec6bf2c61cac5bef1"} Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.746596 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a9a6f6ed65f430774bf05c88a99889462a10a5cdcc0b70fcbaccd4cfec9b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.760571 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2ba35bdc2ffd3bc68ad3a19dbcd16b1b04e4e476958651146cf3ebbacf81570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.778737 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d153375a-777b-4331-992a-81c845c6d6eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f66f1182d0247b3a8413ae9ea30f8a3308dbe1a112c01473bae806c906f7383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:23Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-prn84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.792901 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c6a2646-52f7-41be-8a81-3fed6eac75cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93f091073d8f9193f660e52b257ca0aba0a3b4efff7d6d8f1ee5ed0e4166162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c39d828ab97f5bcb6d5e29df28ec844ede20187
2825065370a69b8ec3b035e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7bxtk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.806056 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ee4772-2b03-404a-9de5-d98dcb431f0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6f25469fce6943fbdd305039e3ed4bbf9694eb3e485d9a7556a708a1434f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181c870b27579fdf828f52cee2ce312219d03f4d4c298f83cf4f74d0e5af0fc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30779fa178428679a956a66ac9e5f5729db30b3aa0f0c5930f224cf140c5703f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77096ca9ca40766ea3f6ef7e915698ff33f824ba2bcfc0bb2c5b9b2eb64f21f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.812442 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.812502 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.812514 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.812528 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.812540 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:32Z","lastTransitionTime":"2026-01-27T13:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.823598 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.838847 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.853143 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.863494 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bsqnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd62799-0a3c-4682-acb6-a2bc9121b391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27f958678979f9bc086a378e90e5c6c8fc466d1c52e58bcc042ff72fafb7a4d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bsqnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.873863 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8jf77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fswjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fswjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8jf77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:32 crc 
kubenswrapper[4786]: I0127 13:07:32.888549 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8573a07-5c6b-490a-abd2-e38fe66ef4f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00de8064231a6
95efe43b465ac740c45561bd6e1998201059115249a9bd8118\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"light.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:07:16.073073 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:07:16.083509 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:07:16.083537 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083542 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083546 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:07:16.083549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:07:16.083552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:07:16.083555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:07:16.083684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 13:07:16.094546 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-572751212/tls.crt::/tmp/serving-cert-572751212/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769519220\\\\\\\\\\\\\\\" (2026-01-27 13:07:00 +0000 UTC to 2026-02-26 13:07:01 +0000 UTC (now=2026-01-27 13:07:16.094502626 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094907 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769519221\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769519221\\\\\\\\\\\\\\\" (2026-01-27 12:07:01 +0000 UTC to 2027-01-27 12:07:01 +0000 UTC (now=2026-01-27 13:07:16.094841616 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094999 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nF0127 13:07:16.095038 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd9
0d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.902205 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.902517 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.902635 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.902738 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.902827 4786 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:32Z","lastTransitionTime":"2026-01-27T13:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.904703 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8f966fc9486cb42a28164dd56ec608330a6697d5e02318f78f9dfce3d79aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/
ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225905e87cce3c64d456d8b5e1bb4a58800fc55d59fcb9e98ef7523f1c8228f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:32 crc kubenswrapper[4786]: E0127 13:07:32.915374 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18042af8-71e3-4882-b2ca-158fe4a2012f\\\",\\\"systemUUID\\\":\\\"56795cdc-7796-46ae-b42e-edbe6c464279\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.921485 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9q6dk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a290f38c-b94c-4233-9d98-9a54a728cedb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8999aa32071dc4f05aef19c7e34333b8788246bb340368b22d15120c38465d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\
\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5xht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-9q6dk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.922896 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.922937 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.922946 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.922960 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.922969 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:32Z","lastTransitionTime":"2026-01-27T13:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.933110 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bh896" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22e89788-1c8e-46a7-97b4-3981352bb420\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3025c216a87acd5041f5dea80529063ca66929e021eadd55f17145fe1ae0a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dl86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bh896\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:32 crc kubenswrapper[4786]: E0127 13:07:32.935001 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18042af8-71e3-4882-b2ca-158fe4a2012f\\\",\\\"systemUUID\\\":\\\"56795cdc-7796-46ae-b42e-edbe6c464279\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.938678 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.938724 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.938732 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.938745 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.938753 4786 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:32Z","lastTransitionTime":"2026-01-27T13:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.947802 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbce2942-82d4-4c43-b41d-20c66d2b0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-722rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-722rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rgrth\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:32 crc kubenswrapper[4786]: E0127 13:07:32.950299 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\
\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":
485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18042af8-71e3-4882-b2ca-158fe4a2012f\\\",\\\"sys
temUUID\\\":\\\"56795cdc-7796-46ae-b42e-edbe6c464279\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.953876 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.953919 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.953929 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.953940 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.953948 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:32Z","lastTransitionTime":"2026-01-27T13:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.954325 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496-metrics-certs\") pod \"network-metrics-daemon-8jf77\" (UID: \"a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496\") " pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:07:32 crc kubenswrapper[4786]: E0127 13:07:32.954513 4786 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 13:07:32 crc kubenswrapper[4786]: E0127 13:07:32.954578 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496-metrics-certs podName:a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496 nodeName:}" failed. No retries permitted until 2026-01-27 13:07:33.954560466 +0000 UTC m=+37.165174595 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496-metrics-certs") pod "network-metrics-daemon-8jf77" (UID: "a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 13:07:32 crc kubenswrapper[4786]: E0127 13:07:32.967030 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb4
9c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\"
:[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d4
6c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\
\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18042af8-71e3-4882-b2ca-1
58fe4a2012f\\\",\\\"systemUUID\\\":\\\"56795cdc-7796-46ae-b42e-edbe6c464279\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.969100 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67692efe-6dd1-450c-9c0c-a16b104376b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd5d21270850bfee1e52c235f542505dc4b8129379b8ef932ed1d59c599abee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":
{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b35278bed2eea01653cf49906c6a1f3447c33d1b80963a3dd10e07a854ec57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9284ead95a7669e31809ac214e2297884715a760a2eb4984e404c059ff2b6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8aec363d5b95eaca7107351edfd1ae381e714391a4db22a72a30d0250534254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9a9fd35c4ee52eeffce0566113373c6d70b3ccadc124e7d6a4b8c7ba1d21a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d9
1495f3c5e373389f6d69638f453dce2e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.971349 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.971377 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.971385 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.971396 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.971404 4786 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:32Z","lastTransitionTime":"2026-01-27T13:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:32 crc kubenswrapper[4786]: E0127 13:07:32.983664 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18042af8-71e3-4882-b2ca-158fe4a2012f\\\",\\\"systemUUID\\\":\\\"56795cdc-7796-46ae-b42e-edbe6c464279\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:32 crc kubenswrapper[4786]: E0127 13:07:32.983780 4786 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.985459 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.985501 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.985513 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.985530 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.985543 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:32Z","lastTransitionTime":"2026-01-27T13:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:32 crc kubenswrapper[4786]: I0127 13:07:32.993392 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbd39cd4c82438fefef67fef48b606b60ed2194292d1f6d839e0ff776c31f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://753c87e7290b08993b5eca9fe5237b5c5a38455f806b74fe1751737dfd70dc8c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"message\\\":\\\"1.069744 6101 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 13:07:31.070253 6101 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 13:07:31.071695 6101 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 13:07:31.071714 6101 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 
13:07:31.071761 6101 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 13:07:31.071775 6101 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 13:07:31.071798 6101 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 13:07:31.071812 6101 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 13:07:31.071817 6101 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 13:07:31.071868 6101 factory.go:656] Stopping watch factory\\\\nI0127 13:07:31.071884 6101 ovnkube.go:599] Stopped ovnkube\\\\nI0127 13:07:31.071918 6101 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 13:07:31.071948 6101 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 13:07:31.071976 6101 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 13:07:31.072025 6101 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 13:07:31.072055 6101 handler.go:208] Removed *v1.EgressIP event ha\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fbd39cd4c82438fefef67fef48b606b60ed2194292d1f6d839e0ff776c31f0a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"ontroller openshift-machine-config-operator aa30290d-3a39-43ba-a212-6439bd680987 4486 0 2025-02-23 05:12:25 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[k8s-app:machine-config-controller] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 
service.beta.openshift.io/serving-cert-secret-name:mcc-proxy-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0075b93ab \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9001,TargetPort:{0 9001 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-config-controller,},ClusterIP:10.217.5.16,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.16],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\
\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"moun
tPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6d56q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:32Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.017061 4786 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbd39cd4c82438fefef67fef48b606b60ed2194292d1f6d839e0ff776c31f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://753c87e7290b08993b5eca9fe5237b5c5a38455f806b74fe1751737dfd70dc8c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"message\\\":\\\"1.069744 6101 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 13:07:31.070253 6101 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 13:07:31.071695 6101 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 13:07:31.071714 6101 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 
13:07:31.071761 6101 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 13:07:31.071775 6101 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 13:07:31.071798 6101 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 13:07:31.071812 6101 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 13:07:31.071817 6101 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 13:07:31.071868 6101 factory.go:656] Stopping watch factory\\\\nI0127 13:07:31.071884 6101 ovnkube.go:599] Stopped ovnkube\\\\nI0127 13:07:31.071918 6101 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 13:07:31.071948 6101 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 13:07:31.071976 6101 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 13:07:31.072025 6101 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 13:07:31.072055 6101 handler.go:208] Removed *v1.EgressIP event ha\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fbd39cd4c82438fefef67fef48b606b60ed2194292d1f6d839e0ff776c31f0a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"ontroller openshift-machine-config-operator aa30290d-3a39-43ba-a212-6439bd680987 4486 0 2025-02-23 05:12:25 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[k8s-app:machine-config-controller] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 
service.beta.openshift.io/serving-cert-secret-name:mcc-proxy-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0075b93ab \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9001,TargetPort:{0 9001 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-config-controller,},ClusterIP:10.217.5.16,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.16],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\
\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"moun
tPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6d56q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.036556 4786 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d153375a-777b-4331-992a-81c845c6d6eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f66f1182d0247b3a8413ae9ea30f8a3308dbe1a112c01473bae806c906f7383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.16
8.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"exit
Code\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f2faafcad491
1d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-prn84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.049330 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c6a2646-52f7-41be-8a81-3fed6eac75cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93f091073d8f9193f660e52b257ca0aba0a3b4efff7d6d8f1ee5ed0e4166162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c39d828ab97f5bcb6d5e29df28ec844ede201872825065370a69b8ec3b035e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7bxtk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T13:07:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.065300 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a9a6f6ed65f430774bf05c88a99889462a10a5cdcc0b70fcbaccd4cfec9b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.079656 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2ba35bdc2ffd3bc68ad3a19dbcd16b1b04e4e476958651146cf3ebbacf81570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-al
erter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.088339 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.088392 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.088405 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.088424 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.088439 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:33Z","lastTransitionTime":"2026-01-27T13:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.092752 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.107994 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.122514 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bsqnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd62799-0a3c-4682-acb6-a2bc9121b391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27f958678979f9bc086a378e90e5c6c8fc466d1c52e58bcc042ff72fafb7a4d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bsqnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.137067 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8jf77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fswjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fswjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8jf77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:33 crc 
kubenswrapper[4786]: I0127 13:07:33.155309 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8573a07-5c6b-490a-abd2-e38fe66ef4f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00de8064231a6
95efe43b465ac740c45561bd6e1998201059115249a9bd8118\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"light.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:07:16.073073 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:07:16.083509 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:07:16.083537 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083542 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083546 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:07:16.083549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:07:16.083552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:07:16.083555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:07:16.083684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 13:07:16.094546 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-572751212/tls.crt::/tmp/serving-cert-572751212/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769519220\\\\\\\\\\\\\\\" (2026-01-27 13:07:00 +0000 UTC to 2026-02-26 13:07:01 +0000 UTC (now=2026-01-27 13:07:16.094502626 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094907 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769519221\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769519221\\\\\\\\\\\\\\\" (2026-01-27 12:07:01 +0000 UTC to 2027-01-27 12:07:01 +0000 UTC (now=2026-01-27 13:07:16.094841616 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094999 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nF0127 13:07:16.095038 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd9
0d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.168156 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ee4772-2b03-404a-9de5-d98dcb431f0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6f25469fce6943fbdd305039e3ed4bbf9694eb3e485d9a7556a708a1434f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181c870b27579fdf828f52cee2ce312219d03f4d4c298f83cf4f74d0e5af0fc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30779fa178428679a956a66ac9e5f5729db30b3aa0f0c5930f224cf140c5703f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77096ca9ca40766ea3f6ef7e915698ff33f824ba2bcfc0bb2c5b9b2eb64f21f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.184557 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.191495 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.191544 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.191557 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 
13:07:33.191578 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.191591 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:33Z","lastTransitionTime":"2026-01-27T13:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.198479 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bh896" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22e89788-1c8e-46a7-97b4-3981352bb420\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3025c216a87acd5041f5dea80529063ca66929e021eadd55f17145fe1ae0a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a
45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dl86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bh896\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.213439 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rgrth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbce2942-82d4-4c43-b41d-20c66d2b0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ef31e49a0cd3198de7ca02721a5731f4ed6283e79697e4f2b0eb2bd00712eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-722rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd2b4623ac1917ef8bd9d422064f7090f88a1
ae36158d508d314c46b6d3506a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-722rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.238625 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67692efe-6dd1-450c-9c0c-a16b104376b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd5d21270850bfee1e52c235f542505dc4b8129379b8ef932ed1d59c599abee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b35278bed2eea01653cf49906c6a1f3447c33d1b80963a3dd10e07a854ec57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9284ead95a7669e31809ac214e2297884715a760a2eb4984e404c059ff2b6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8aec363d5b95eaca7107351edfd1ae381e714391a4db22a72a30d0250534254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9a9fd35c4ee52eeffce0566113373c6d70b3ccadc124e7d6a4b8c7ba1d21a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.255566 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8f966fc9486cb42a28164dd56ec608330a6697d5e02318f78f9dfce3d79aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225905e87cce3c64d456d8b5e1bb4a58800fc55d59fcb9e98ef7523f1c8228f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.272315 4786 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9q6dk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a290f38c-b94c-4233-9d98-9a54a728cedb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8999aa32071dc4f05aef19c7e34333b8788246bb340368b22d15120c38465d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\"
:\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5xht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9q6dk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:33Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.294618 4786 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.294893 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.295170 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.295372 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.295569 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:33Z","lastTransitionTime":"2026-01-27T13:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.397726 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.398071 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.398324 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.398532 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.398765 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:33Z","lastTransitionTime":"2026-01-27T13:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.437972 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 09:22:58.336083251 +0000 UTC Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.475433 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:07:33 crc kubenswrapper[4786]: E0127 13:07:33.475675 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.500949 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.500987 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.500996 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.501009 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.501018 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:33Z","lastTransitionTime":"2026-01-27T13:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.603842 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.603878 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.603887 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.603901 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.603910 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:33Z","lastTransitionTime":"2026-01-27T13:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.705933 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.705979 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.705990 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.706029 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.706040 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:33Z","lastTransitionTime":"2026-01-27T13:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.738054 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6d56q_ad21a31d-efbf-4c10-b3d1-0f6cf71793bd/ovnkube-controller/1.log" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.809632 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.809807 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.809975 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.810105 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.810191 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:33Z","lastTransitionTime":"2026-01-27T13:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.913162 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.913435 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.913705 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.913815 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.914067 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:33Z","lastTransitionTime":"2026-01-27T13:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:33 crc kubenswrapper[4786]: I0127 13:07:33.965173 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496-metrics-certs\") pod \"network-metrics-daemon-8jf77\" (UID: \"a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496\") " pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:07:33 crc kubenswrapper[4786]: E0127 13:07:33.965428 4786 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 13:07:33 crc kubenswrapper[4786]: E0127 13:07:33.965554 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496-metrics-certs podName:a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496 nodeName:}" failed. No retries permitted until 2026-01-27 13:07:35.965525301 +0000 UTC m=+39.176139460 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496-metrics-certs") pod "network-metrics-daemon-8jf77" (UID: "a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.018641 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.018682 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.018693 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.018711 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.018726 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:34Z","lastTransitionTime":"2026-01-27T13:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.122663 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.123141 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.123242 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.123339 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.123434 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:34Z","lastTransitionTime":"2026-01-27T13:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.226412 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.226475 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.226489 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.226510 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.226526 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:34Z","lastTransitionTime":"2026-01-27T13:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.330004 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.330062 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.330076 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.330095 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.330111 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:34Z","lastTransitionTime":"2026-01-27T13:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.432903 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.432940 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.432950 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.432966 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.432976 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:34Z","lastTransitionTime":"2026-01-27T13:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.439226 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 16:36:43.642161222 +0000 UTC Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.464719 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.464781 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.464866 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:07:34 crc kubenswrapper[4786]: E0127 13:07:34.464903 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:07:34 crc kubenswrapper[4786]: E0127 13:07:34.464996 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:07:34 crc kubenswrapper[4786]: E0127 13:07:34.465080 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.535199 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.535246 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.535274 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.535288 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.535299 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:34Z","lastTransitionTime":"2026-01-27T13:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.638101 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.638405 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.638513 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.638642 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.638717 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:34Z","lastTransitionTime":"2026-01-27T13:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.741500 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.741540 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.741549 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.741563 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.741574 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:34Z","lastTransitionTime":"2026-01-27T13:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.844166 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.844206 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.844220 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.844236 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.844247 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:34Z","lastTransitionTime":"2026-01-27T13:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.946978 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.947009 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.947017 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.947029 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:34 crc kubenswrapper[4786]: I0127 13:07:34.947037 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:34Z","lastTransitionTime":"2026-01-27T13:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.049413 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.049459 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.049469 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.049490 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.049501 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:35Z","lastTransitionTime":"2026-01-27T13:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.152102 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.152141 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.152152 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.152171 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.152183 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:35Z","lastTransitionTime":"2026-01-27T13:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.254650 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.254691 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.254700 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.254715 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.254729 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:35Z","lastTransitionTime":"2026-01-27T13:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.356908 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.356951 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.356962 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.356975 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.356985 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:35Z","lastTransitionTime":"2026-01-27T13:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.439866 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 16:58:54.937470352 +0000 UTC Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.459441 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.459483 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.459491 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.459506 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.459517 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:35Z","lastTransitionTime":"2026-01-27T13:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.464807 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:07:35 crc kubenswrapper[4786]: E0127 13:07:35.465053 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.562019 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.562269 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.562363 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.562452 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.562528 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:35Z","lastTransitionTime":"2026-01-27T13:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.665225 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.665474 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.665592 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.665829 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.665999 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:35Z","lastTransitionTime":"2026-01-27T13:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.768331 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.768642 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.768756 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.768857 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.768952 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:35Z","lastTransitionTime":"2026-01-27T13:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.872309 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.872513 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.872703 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.872845 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.872906 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:35Z","lastTransitionTime":"2026-01-27T13:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.976082 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.976325 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.976639 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.976750 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.976826 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:35Z","lastTransitionTime":"2026-01-27T13:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:35 crc kubenswrapper[4786]: I0127 13:07:35.984700 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496-metrics-certs\") pod \"network-metrics-daemon-8jf77\" (UID: \"a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496\") " pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:07:35 crc kubenswrapper[4786]: E0127 13:07:35.984889 4786 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 13:07:35 crc kubenswrapper[4786]: E0127 13:07:35.985062 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496-metrics-certs podName:a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496 nodeName:}" failed. No retries permitted until 2026-01-27 13:07:39.985046528 +0000 UTC m=+43.195660647 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496-metrics-certs") pod "network-metrics-daemon-8jf77" (UID: "a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.080660 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.080731 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.080749 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.080777 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.080798 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:36Z","lastTransitionTime":"2026-01-27T13:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.183500 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.183857 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.184011 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.184146 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.184377 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:36Z","lastTransitionTime":"2026-01-27T13:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.287490 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.287558 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.287571 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.287633 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.287648 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:36Z","lastTransitionTime":"2026-01-27T13:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.390478 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.390538 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.390550 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.390569 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.390581 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:36Z","lastTransitionTime":"2026-01-27T13:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.440189 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 04:41:27.741656346 +0000 UTC Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.464842 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.464890 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.464866 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:07:36 crc kubenswrapper[4786]: E0127 13:07:36.465053 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:07:36 crc kubenswrapper[4786]: E0127 13:07:36.465644 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:07:36 crc kubenswrapper[4786]: E0127 13:07:36.465677 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.492584 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.492637 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.492650 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.492664 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.492673 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:36Z","lastTransitionTime":"2026-01-27T13:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.595263 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.595650 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.595662 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.595677 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.595689 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:36Z","lastTransitionTime":"2026-01-27T13:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.699001 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.699050 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.699064 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.699083 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.699096 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:36Z","lastTransitionTime":"2026-01-27T13:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.802158 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.802198 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.802210 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.802226 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.802236 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:36Z","lastTransitionTime":"2026-01-27T13:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.904195 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.904249 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.904260 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.904277 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:36 crc kubenswrapper[4786]: I0127 13:07:36.904289 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:36Z","lastTransitionTime":"2026-01-27T13:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.008008 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.008360 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.008444 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.008558 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.008650 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:37Z","lastTransitionTime":"2026-01-27T13:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.111664 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.111705 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.111716 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.111736 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.111748 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:37Z","lastTransitionTime":"2026-01-27T13:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.216069 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.216156 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.216177 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.216206 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.216232 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:37Z","lastTransitionTime":"2026-01-27T13:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.320011 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.320086 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.320106 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.320133 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.320152 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:37Z","lastTransitionTime":"2026-01-27T13:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.423063 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.423105 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.423115 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.423131 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.423140 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:37Z","lastTransitionTime":"2026-01-27T13:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.440656 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 18:14:44.72702792 +0000 UTC Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.464276 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:07:37 crc kubenswrapper[4786]: E0127 13:07:37.464514 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.479539 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8jf77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fswjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fswjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8jf77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:37Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:37 crc 
kubenswrapper[4786]: I0127 13:07:37.496549 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8573a07-5c6b-490a-abd2-e38fe66ef4f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00de8064231a6
95efe43b465ac740c45561bd6e1998201059115249a9bd8118\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"light.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:07:16.073073 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:07:16.083509 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:07:16.083537 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083542 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083546 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:07:16.083549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:07:16.083552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:07:16.083555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:07:16.083684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 13:07:16.094546 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-572751212/tls.crt::/tmp/serving-cert-572751212/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769519220\\\\\\\\\\\\\\\" (2026-01-27 13:07:00 +0000 UTC to 2026-02-26 13:07:01 +0000 UTC (now=2026-01-27 13:07:16.094502626 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094907 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769519221\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769519221\\\\\\\\\\\\\\\" (2026-01-27 12:07:01 +0000 UTC to 2027-01-27 12:07:01 +0000 UTC (now=2026-01-27 13:07:16.094841616 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094999 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nF0127 13:07:16.095038 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd9
0d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:37Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.510674 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ee4772-2b03-404a-9de5-d98dcb431f0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6f25469fce6943fbdd305039e3ed4bbf9694eb3e485d9a7556a708a1434f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181c870b27579fdf828f52cee2ce312219d03f4d4c298f83cf4f74d0e5af0fc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30779fa178428679a956a66ac9e5f5729db30b3aa0f0c5930f224cf140c5703f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77096ca9ca40766ea3f6ef7e915698ff33f824ba2bcfc0bb2c5b9b2eb64f21f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:37Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.525530 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.525572 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.525583 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.525598 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.525628 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:37Z","lastTransitionTime":"2026-01-27T13:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.528840 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:37Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.541743 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:37Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.556631 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:37Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.569962 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bsqnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd62799-0a3c-4682-acb6-a2bc9121b391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27f958678979f9bc086a378e90e5c6c8fc466d1c52e58bcc042ff72fafb7a4d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bsqnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:37Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.587862 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67692efe-6dd1-450c-9c0c-a16b104376b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd5d21270850bfee1e52c235f542505dc4b8129379b8ef932ed1d59c599abee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b35278bed2eea01653cf49906c6a1f3447c33d1b80963a3dd10e07a854ec57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9284ead95a7669e31809ac214e2297884715a760a2eb4984e404c059ff2b6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8aec363d5b95eaca7107351edfd1ae381e714391a4db22a72a30d0250534254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9a9fd35c4ee52eeffce0566113373c6d70b3ccadc124e7d6a4b8c7ba1d21a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:37Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.602104 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8f966fc9486cb42a28164dd56ec608330a6697d5e02318f78f9dfce3d79aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225905e87cce3c64d456d8b5e1bb4a58800fc55d59fcb9e98ef7523f1c8228f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:37Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.614745 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9q6dk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a290f38c-b94c-4233-9d98-9a54a728cedb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8999aa32071dc4f05aef19c7e34333b8788246bb340368b22d15120c38465d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5xht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9q6dk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:37Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.625739 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bh896" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22e89788-1c8e-46a7-97b4-3981352bb420\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3025c216a87acd5041f5dea80529063ca66929e021eadd55f17145fe1ae0a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dl86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bh896\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:37Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.627829 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.627864 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.627875 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.627889 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.627899 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:37Z","lastTransitionTime":"2026-01-27T13:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.638557 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbce2942-82d4-4c43-b41d-20c66d2b0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ef31e49a0cd3198de7ca02721a5731f4ed6283e79697e4f2b0eb2bd00712eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-722rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd2b4623ac1917ef8bd9d422064f7090f88a1ae36158d508d314c46b6d3506a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-722rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:37Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.656885 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbd39cd4c82438fefef67fef48b606b60ed2194292d1f6d839e0ff776c31f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://753c87e7290b08993b5eca9fe5237b5c5a38455f806b74fe1751737dfd70dc8c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"message\\\":\\\"1.069744 6101 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 13:07:31.070253 6101 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 13:07:31.071695 6101 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 13:07:31.071714 6101 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 
13:07:31.071761 6101 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 13:07:31.071775 6101 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 13:07:31.071798 6101 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 13:07:31.071812 6101 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 13:07:31.071817 6101 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 13:07:31.071868 6101 factory.go:656] Stopping watch factory\\\\nI0127 13:07:31.071884 6101 ovnkube.go:599] Stopped ovnkube\\\\nI0127 13:07:31.071918 6101 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 13:07:31.071948 6101 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 13:07:31.071976 6101 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 13:07:31.072025 6101 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 13:07:31.072055 6101 handler.go:208] Removed *v1.EgressIP event ha\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fbd39cd4c82438fefef67fef48b606b60ed2194292d1f6d839e0ff776c31f0a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"ontroller openshift-machine-config-operator aa30290d-3a39-43ba-a212-6439bd680987 4486 0 2025-02-23 05:12:25 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[k8s-app:machine-config-controller] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 
service.beta.openshift.io/serving-cert-secret-name:mcc-proxy-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0075b93ab \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9001,TargetPort:{0 9001 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-config-controller,},ClusterIP:10.217.5.16,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.16],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\
\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"moun
tPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6d56q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:37Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.669718 4786 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a9a6f6ed65f430774bf05c88a99889462a10a5cdcc0b70fcbaccd4cfec9b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:37Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.680537 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2ba35bdc2ffd3bc68ad3a19dbcd16b1b04e4e476958651146cf3ebbacf81570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:37Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.694086 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d153375a-777b-4331-992a-81c845c6d6eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f66f1182d0247b3a8413ae9ea30f8a3308dbe1a112c01473bae806c906f7383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID
\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID
\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"exitCode\\\"
:0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-prn84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:37Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.706762 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c6a2646-52f7-41be-8a81-3fed6eac75cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93f091073d8f9193f660e52b257ca0aba0a3b4efff7d6d8f1ee5ed0e4166162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c39d828ab97f5bcb6d5e29df28ec844ede20187
2825065370a69b8ec3b035e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7bxtk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:37Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.729965 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.730141 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.730233 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:37 crc 
kubenswrapper[4786]: I0127 13:07:37.730337 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.730422 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:37Z","lastTransitionTime":"2026-01-27T13:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.832814 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.832858 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.832877 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.832894 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.832904 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:37Z","lastTransitionTime":"2026-01-27T13:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.936752 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.936810 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.936831 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.936856 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:37 crc kubenswrapper[4786]: I0127 13:07:37.936876 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:37Z","lastTransitionTime":"2026-01-27T13:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.039121 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.039160 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.039170 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.039187 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.039196 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:38Z","lastTransitionTime":"2026-01-27T13:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.141919 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.142205 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.142284 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.142354 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.142412 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:38Z","lastTransitionTime":"2026-01-27T13:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.244566 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.244867 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.244941 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.245008 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.245137 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:38Z","lastTransitionTime":"2026-01-27T13:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.347330 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.347411 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.347432 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.347465 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.347492 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:38Z","lastTransitionTime":"2026-01-27T13:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.441661 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 01:17:10.120722864 +0000 UTC Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.449829 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.450083 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.450186 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.450255 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.450310 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:38Z","lastTransitionTime":"2026-01-27T13:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.464866 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.464963 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:07:38 crc kubenswrapper[4786]: E0127 13:07:38.465211 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:07:38 crc kubenswrapper[4786]: E0127 13:07:38.465342 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.464961 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:07:38 crc kubenswrapper[4786]: E0127 13:07:38.465426 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.552500 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.552909 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.552978 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.553055 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.553230 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:38Z","lastTransitionTime":"2026-01-27T13:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.655674 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.655931 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.656022 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.656109 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.656195 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:38Z","lastTransitionTime":"2026-01-27T13:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.759091 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.759133 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.759142 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.759177 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.759188 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:38Z","lastTransitionTime":"2026-01-27T13:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.861992 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.862105 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.862173 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.862211 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.862277 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:38Z","lastTransitionTime":"2026-01-27T13:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.965582 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.965676 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.965686 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.965723 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:38 crc kubenswrapper[4786]: I0127 13:07:38.965736 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:38Z","lastTransitionTime":"2026-01-27T13:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.069422 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.069478 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.069491 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.069513 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.069796 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:39Z","lastTransitionTime":"2026-01-27T13:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.172854 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.172948 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.172958 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.172978 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.172990 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:39Z","lastTransitionTime":"2026-01-27T13:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.276718 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.276991 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.277104 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.277182 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.277251 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:39Z","lastTransitionTime":"2026-01-27T13:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.380497 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.380564 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.380593 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.380659 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.380679 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:39Z","lastTransitionTime":"2026-01-27T13:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.441846 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 15:12:03.601777227 +0000 UTC Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.464658 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:07:39 crc kubenswrapper[4786]: E0127 13:07:39.464964 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.484330 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.484483 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.484507 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.484540 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.484564 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:39Z","lastTransitionTime":"2026-01-27T13:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.588179 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.588263 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.588274 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.588288 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.588299 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:39Z","lastTransitionTime":"2026-01-27T13:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.691581 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.691639 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.691651 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.691666 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.691684 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:39Z","lastTransitionTime":"2026-01-27T13:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.794286 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.794355 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.794374 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.794406 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.794427 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:39Z","lastTransitionTime":"2026-01-27T13:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.898571 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.898633 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.898649 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.898673 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:39 crc kubenswrapper[4786]: I0127 13:07:39.898688 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:39Z","lastTransitionTime":"2026-01-27T13:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.001669 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.001729 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.001747 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.001774 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.001790 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:40Z","lastTransitionTime":"2026-01-27T13:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.031730 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496-metrics-certs\") pod \"network-metrics-daemon-8jf77\" (UID: \"a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496\") " pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:07:40 crc kubenswrapper[4786]: E0127 13:07:40.031909 4786 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 13:07:40 crc kubenswrapper[4786]: E0127 13:07:40.032003 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496-metrics-certs podName:a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496 nodeName:}" failed. No retries permitted until 2026-01-27 13:07:48.031981583 +0000 UTC m=+51.242595702 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496-metrics-certs") pod "network-metrics-daemon-8jf77" (UID: "a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.106258 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.106440 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.106475 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.106559 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.106583 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:40Z","lastTransitionTime":"2026-01-27T13:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.209938 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.210009 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.210022 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.210046 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.210086 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:40Z","lastTransitionTime":"2026-01-27T13:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.312567 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.312627 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.312642 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.312659 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.312670 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:40Z","lastTransitionTime":"2026-01-27T13:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.416233 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.416469 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.416572 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.416671 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.416746 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:40Z","lastTransitionTime":"2026-01-27T13:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.442066 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 12:19:30.376787085 +0000 UTC Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.464476 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:07:40 crc kubenswrapper[4786]: E0127 13:07:40.464659 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.464703 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.464505 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:07:40 crc kubenswrapper[4786]: E0127 13:07:40.464968 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:07:40 crc kubenswrapper[4786]: E0127 13:07:40.465160 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.519877 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.519913 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.519922 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.519955 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.519967 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:40Z","lastTransitionTime":"2026-01-27T13:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.622799 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.622841 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.622856 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.622876 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.622890 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:40Z","lastTransitionTime":"2026-01-27T13:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.625066 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.626414 4786 scope.go:117] "RemoveContainer" containerID="2fbd39cd4c82438fefef67fef48b606b60ed2194292d1f6d839e0ff776c31f0a" Jan 27 13:07:40 crc kubenswrapper[4786]: E0127 13:07:40.626686 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-6d56q_openshift-ovn-kubernetes(ad21a31d-efbf-4c10-b3d1-0f6cf71793bd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.646305 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8f966fc9486cb42a28164dd56ec608330a6697d5e02318f78f9dfce3d79aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225905e87cce3c64d456d8b5e1bb4a58800fc55d59fcb9e98ef7523f1c8228f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:40Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.664794 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9q6dk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a290f38c-b94c-4233-9d98-9a54a728cedb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8999aa32071dc4f05aef19c7e34333b8788246bb340368b22d15120c38465d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5xht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9q6dk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:40Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.678384 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bh896" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22e89788-1c8e-46a7-97b4-3981352bb420\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3025c216a87acd5041f5dea80529063ca66929e021eadd55f17145fe1ae0a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dl86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bh896\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:40Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.688646 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbce2942-82d4-4c43-b41d-20c66d2b0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ef31e49a0cd3198de7ca02721a5731f4ed6283e79697e4f2b0eb2bd00712eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-722rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd2b4623ac1917ef8bd9d422064f7090f88a1ae36158d508d314c46b6d3506a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-722rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rgrth\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:40Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.725194 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.725450 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.725541 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.725648 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.725966 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:40Z","lastTransitionTime":"2026-01-27T13:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.756322 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67692efe-6dd1-450c-9c0c-a16b104376b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd5d21270850bfee1e52c235f542505dc4b8129379b8ef932ed1d59c599abee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b35278bed2eea01653cf49906c6a1f3447c33d1b80963a3dd10e07a854ec57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9284ead95a7669e31809ac214e2297884715a760a2eb4984e404c059ff2b6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8aec363d5b95eaca7107351edfd1ae381e714391a4db22a72a30d0250534254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9a9fd35c4ee52eeffce0566113373c6d70b3ccadc124e7d6a4b8c7ba1d21a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:40Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.779509 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbd39cd4c82438fefef67fef48b606b60ed2194292d1f6d839e0ff776c31f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fbd39cd4c82438fefef67fef48b606b60ed2194292d1f6d839e0ff776c31f0a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"ontroller openshift-machine-config-operator aa30290d-3a39-43ba-a212-6439bd680987 4486 0 
2025-02-23 05:12:25 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[k8s-app:machine-config-controller] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:mcc-proxy-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0075b93ab \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9001,TargetPort:{0 9001 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-config-controller,},ClusterIP:10.217.5.16,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.16],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6d56q_openshift-ovn-kubernetes(ad21a31d-efbf-4c10-b3d1-0f6cf71793bd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316
c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6d56q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:40Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.794027 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a9a6f6ed65f430774bf05c88a99889462a10a5cdcc0b70fcbaccd4cfec9b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:40Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.804788 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2ba35bdc2ffd3bc68ad3a19dbcd16b1b04e4e476958651146cf3ebbacf81570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:40Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.819050 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d153375a-777b-4331-992a-81c845c6d6eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f66f1182d0247b3a8413ae9ea30f8a3308dbe1a112c01473bae806c906f7383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:23Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-prn84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:40Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.827954 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.827995 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.828007 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.828026 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.828044 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:40Z","lastTransitionTime":"2026-01-27T13:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.831771 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c6a2646-52f7-41be-8a81-3fed6eac75cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93f091073d8f9193f660e52b257ca0aba0a3b4efff7d6d8f1ee5ed0e4166162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-pro
xy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c39d828ab97f5bcb6d5e29df28ec844ede201872825065370a69b8ec3b035e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7bxtk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:40Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.850478 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ee4772-2b03-404a-9de5-d98dcb431f0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6f25469fce6943fbdd305039e3ed4bbf9694eb3e485d9a7556a708a1434f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181c870b27579fdf828f52cee2ce312219d03f4d4c298f83cf4f74d0e5af0fc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30779fa178428679a956a66ac9e5f5729db30b3aa0f0c5930f224cf140c5703f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77096ca9ca40766ea3f6ef7e915698ff33f824ba2bcfc0bb2c5b9b2eb64f21f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:40Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.863050 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:40Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.877748 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:40Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.889736 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:40Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.902269 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bsqnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd62799-0a3c-4682-acb6-a2bc9121b391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27f958678979f9bc086a378e90e5c6c8fc466d1c52e58bcc042ff72fafb7a4d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bsqnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:40Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.913665 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8jf77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fswjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fswjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8jf77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:40Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:40 crc 
kubenswrapper[4786]: I0127 13:07:40.925248 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8573a07-5c6b-490a-abd2-e38fe66ef4f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00de8064231a6
95efe43b465ac740c45561bd6e1998201059115249a9bd8118\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"light.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:07:16.073073 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:07:16.083509 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:07:16.083537 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083542 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083546 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:07:16.083549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:07:16.083552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:07:16.083555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:07:16.083684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 13:07:16.094546 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-572751212/tls.crt::/tmp/serving-cert-572751212/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769519220\\\\\\\\\\\\\\\" (2026-01-27 13:07:00 +0000 UTC to 2026-02-26 13:07:01 +0000 UTC (now=2026-01-27 13:07:16.094502626 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094907 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769519221\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769519221\\\\\\\\\\\\\\\" (2026-01-27 12:07:01 +0000 UTC to 2027-01-27 12:07:01 +0000 UTC (now=2026-01-27 13:07:16.094841616 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094999 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nF0127 13:07:16.095038 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd9
0d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:40Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.929757 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.929787 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.929825 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.929842 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:40 crc kubenswrapper[4786]: I0127 13:07:40.929853 4786 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:40Z","lastTransitionTime":"2026-01-27T13:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.032314 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.032362 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.032374 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.032391 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.032402 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:41Z","lastTransitionTime":"2026-01-27T13:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.134338 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.134377 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.134385 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.134399 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.134408 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:41Z","lastTransitionTime":"2026-01-27T13:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.237330 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.237592 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.237688 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.237760 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.237878 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:41Z","lastTransitionTime":"2026-01-27T13:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.340019 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.340064 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.340078 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.340093 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.340104 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:41Z","lastTransitionTime":"2026-01-27T13:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.442290 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 06:15:34.708192232 +0000 UTC Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.442870 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.442926 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.442943 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.442972 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.442991 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:41Z","lastTransitionTime":"2026-01-27T13:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.464632 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:07:41 crc kubenswrapper[4786]: E0127 13:07:41.464865 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.545756 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.546118 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.546317 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.546471 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.546596 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:41Z","lastTransitionTime":"2026-01-27T13:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.649520 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.649592 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.649631 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.649660 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.649677 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:41Z","lastTransitionTime":"2026-01-27T13:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.752376 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.752427 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.752440 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.752461 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.752474 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:41Z","lastTransitionTime":"2026-01-27T13:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.854838 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.854895 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.854908 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.854931 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.854945 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:41Z","lastTransitionTime":"2026-01-27T13:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.957671 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.957717 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.957730 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.957748 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:41 crc kubenswrapper[4786]: I0127 13:07:41.957761 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:41Z","lastTransitionTime":"2026-01-27T13:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.059761 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.060122 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.060231 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.060400 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.060512 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:42Z","lastTransitionTime":"2026-01-27T13:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.163118 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.163379 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.163546 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.163697 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.163838 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:42Z","lastTransitionTime":"2026-01-27T13:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.266625 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.266890 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.266970 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.267050 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.267139 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:42Z","lastTransitionTime":"2026-01-27T13:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.369245 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.369291 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.369303 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.369323 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.369336 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:42Z","lastTransitionTime":"2026-01-27T13:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.443351 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 11:17:04.962257609 +0000 UTC Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.464845 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.464845 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:07:42 crc kubenswrapper[4786]: E0127 13:07:42.464981 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:07:42 crc kubenswrapper[4786]: E0127 13:07:42.465069 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.465378 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:07:42 crc kubenswrapper[4786]: E0127 13:07:42.465633 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.471947 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.471980 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.471989 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.472004 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.472014 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:42Z","lastTransitionTime":"2026-01-27T13:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.574449 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.574480 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.574488 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.574502 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.574510 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:42Z","lastTransitionTime":"2026-01-27T13:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.676712 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.677076 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.677165 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.677256 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.677342 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:42Z","lastTransitionTime":"2026-01-27T13:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.779955 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.779985 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.779993 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.780005 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.780014 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:42Z","lastTransitionTime":"2026-01-27T13:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.882701 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.882742 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.882751 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.882766 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.882778 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:42Z","lastTransitionTime":"2026-01-27T13:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.985930 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.985975 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.985984 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.986004 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:42 crc kubenswrapper[4786]: I0127 13:07:42.986014 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:42Z","lastTransitionTime":"2026-01-27T13:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.088349 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.088412 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.088424 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.088442 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.088453 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:43Z","lastTransitionTime":"2026-01-27T13:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.190979 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.191293 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.191359 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.191456 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.191515 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:43Z","lastTransitionTime":"2026-01-27T13:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.292637 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.292951 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.293063 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.293158 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.293233 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:43Z","lastTransitionTime":"2026-01-27T13:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:43 crc kubenswrapper[4786]: E0127 13:07:43.311206 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18042af8-71e3-4882-b2ca-158fe4a2012f\\\",\\\"systemUUID\\\":\\\"56795cdc-7796-46ae-b42e-edbe6c464279\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:43Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.315268 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.315313 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.315345 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.315361 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.315372 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:43Z","lastTransitionTime":"2026-01-27T13:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.332414 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.332454 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.332463 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.332477 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.332486 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:43Z","lastTransitionTime":"2026-01-27T13:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.350223 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.350249 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.350259 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.350275 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.350304 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:43Z","lastTransitionTime":"2026-01-27T13:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:43 crc kubenswrapper[4786]: E0127 13:07:43.364784 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18042af8-71e3-4882-b2ca-158fe4a2012f\\\",\\\"systemUUID\\\":\\\"56795cdc-7796-46ae-b42e-edbe6c464279\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:43Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.368320 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.368371 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.368382 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.368402 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.368415 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:43Z","lastTransitionTime":"2026-01-27T13:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:43 crc kubenswrapper[4786]: E0127 13:07:43.380726 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18042af8-71e3-4882-b2ca-158fe4a2012f\\\",\\\"systemUUID\\\":\\\"56795cdc-7796-46ae-b42e-edbe6c464279\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:43Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:43 crc kubenswrapper[4786]: E0127 13:07:43.380869 4786 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.382719 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.382770 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.382782 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.382803 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.382819 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:43Z","lastTransitionTime":"2026-01-27T13:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.444222 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 12:19:12.412565774 +0000 UTC Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.464773 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:07:43 crc kubenswrapper[4786]: E0127 13:07:43.464927 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.484626 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.484656 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.484665 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.484676 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.484685 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:43Z","lastTransitionTime":"2026-01-27T13:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.587532 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.587586 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.587615 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.587638 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.587652 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:43Z","lastTransitionTime":"2026-01-27T13:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.690889 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.690960 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.690974 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.690990 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.691019 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:43Z","lastTransitionTime":"2026-01-27T13:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.793480 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.793544 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.793566 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.793594 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.793662 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:43Z","lastTransitionTime":"2026-01-27T13:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.896689 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.896751 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.896762 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.896777 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:43 crc kubenswrapper[4786]: I0127 13:07:43.896789 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:43Z","lastTransitionTime":"2026-01-27T13:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.000256 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.000315 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.000327 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.000354 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.000365 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:44Z","lastTransitionTime":"2026-01-27T13:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.102191 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.102223 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.102233 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.102246 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.102255 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:44Z","lastTransitionTime":"2026-01-27T13:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.205785 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.205856 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.205865 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.205880 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.205890 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:44Z","lastTransitionTime":"2026-01-27T13:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.308632 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.308864 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.308963 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.309038 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.309100 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:44Z","lastTransitionTime":"2026-01-27T13:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.411696 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.411762 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.411779 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.411802 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.411820 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:44Z","lastTransitionTime":"2026-01-27T13:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.445128 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 09:42:34.317162959 +0000 UTC Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.464224 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.464271 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:07:44 crc kubenswrapper[4786]: E0127 13:07:44.464469 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:07:44 crc kubenswrapper[4786]: E0127 13:07:44.464584 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.464827 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:07:44 crc kubenswrapper[4786]: E0127 13:07:44.464936 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.514331 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.514378 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.514387 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.514400 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.514409 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:44Z","lastTransitionTime":"2026-01-27T13:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.617258 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.617348 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.617366 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.617419 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.617438 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:44Z","lastTransitionTime":"2026-01-27T13:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.719775 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.719871 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.719893 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.719916 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.719933 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:44Z","lastTransitionTime":"2026-01-27T13:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.821923 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.821962 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.821975 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.821996 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.822012 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:44Z","lastTransitionTime":"2026-01-27T13:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.862355 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.871585 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.880233 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:44Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.896822 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:44Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.908586 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bsqnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd62799-0a3c-4682-acb6-a2bc9121b391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27f958678979f9bc086a378e90e5c6c8fc466d1c52e58bcc042ff72fafb7a4d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bsqnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:44Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.920731 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8jf77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fswjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fswjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8jf77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:44Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:44 crc 
kubenswrapper[4786]: I0127 13:07:44.924364 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.924411 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.924421 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.924435 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.924444 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:44Z","lastTransitionTime":"2026-01-27T13:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.934786 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8573a07-5c6b-490a-abd2-e38fe66ef4f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00de8064231a695efe43b465ac740c45561bd6e1998201059115249a9bd8118\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"light.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:07:16.073073 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:07:16.083509 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:07:16.083537 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083542 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083546 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:07:16.083549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:07:16.083552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:07:16.083555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:07:16.083684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 13:07:16.094546 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-572751212/tls.crt::/tmp/serving-cert-572751212/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769519220\\\\\\\\\\\\\\\" (2026-01-27 13:07:00 +0000 UTC to 2026-02-26 13:07:01 +0000 UTC (now=2026-01-27 13:07:16.094502626 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094907 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed 
loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769519221\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769519221\\\\\\\\\\\\\\\" (2026-01-27 12:07:01 +0000 UTC to 2027-01-27 12:07:01 +0000 UTC (now=2026-01-27 13:07:16.094841616 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094999 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nF0127 13:07:16.095038 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6
de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:44Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.947458 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ee4772-2b03-404a-9de5-d98dcb431f0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6f25469fce6943fbdd305039e3ed4bbf9694eb3e485d9a7556a708a1434f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181c870b27579fdf828f52cee2ce312219d03f4d4c298f83cf4f74d0e5af0fc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30779fa178428679a956a66ac9e5f5729db30b3aa0f0c5930f224cf140c5703f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77096ca9ca40766ea3f6ef7e915698ff33f824ba2bcfc0bb2c5b9b2eb64f21f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:44Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.960953 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:44Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.975814 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bh896" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22e89788-1c8e-46a7-97b4-3981352bb420\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3025c216a87acd5041f5dea80529063ca66929e021eadd55f17145fe1ae0a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dl86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bh896\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:44Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:44 crc kubenswrapper[4786]: I0127 13:07:44.989038 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbce2942-82d4-4c43-b41d-20c66d2b0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ef31e49a0cd3198de7ca02721a5731f4ed6283e79697e4f2b0eb2bd00712eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-722rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd2b4623ac1917ef8bd9d422064f7090f88a1ae36158d508d314c46b6d3506a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-722rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rgrth\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:44Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.017706 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67692efe-6dd1-450c-9c0c-a16b104376b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd5d21270850bfee1e52c235f542505dc4b8129379b8ef932ed1d59c599abee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
tc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b35278bed2eea01653cf49906c6a1f3447c33d1b80963a3dd10e07a854ec57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9284ead95a7669e31809ac214e2297884715a760a2eb4984e404c059ff2b6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://b8aec363d5b95eaca7107351edfd1ae381e714391a4db22a72a30d0250534254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9a9fd35c4ee52eeffce0566113373c6d70b3ccadc124e7d6a4b8c7ba1d21a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7
c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:45Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.027107 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.027147 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.027160 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.027184 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.027195 4786 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:45Z","lastTransitionTime":"2026-01-27T13:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.032443 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8f966fc9486cb42a28164dd56ec608330a6697d5e02318f78f9dfce3d79aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225905e87cce3c64d456d8b5e1bb4a58800fc55d59fcb9e98ef7523f1c8228f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:45Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.048002 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9q6dk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a290f38c-b94c-4233-9d98-9a54a728cedb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8999aa32071dc4f05aef19c7e34333b8788246bb340368b22d15120c38465d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5xht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9q6dk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:45Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.066964 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbd39cd4c82438fefef67fef48b606b60ed2194292d1f6d839e0ff776c31f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fbd39cd4c82438fefef67fef48b606b60ed2194292d1f6d839e0ff776c31f0a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"ontroller openshift-machine-config-operator aa30290d-3a39-43ba-a212-6439bd680987 4486 0 2025-02-23 05:12:25 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[k8s-app:machine-config-controller] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true 
include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:mcc-proxy-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0075b93ab \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9001,TargetPort:{0 9001 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-config-controller,},ClusterIP:10.217.5.16,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.16],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6d56q_openshift-ovn-kubernetes(ad21a31d-efbf-4c10-b3d1-0f6cf71793bd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316
c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6d56q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:45Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.083774 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d153375a-777b-4331-992a-81c845c6d6eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f66f1182d0247b3a8413ae9ea30f8a3308dbe1a112c01473bae806c906f7383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a786
dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-prn84\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:45Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.097288 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c6a2646-52f7-41be-8a81-3fed6eac75cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93f091073d8f9193f660e52b257ca0aba0a3b4efff7d6d8f1ee5ed0e4166162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c39d828ab97f5bcb6d5e29df28ec844ede201872825065370a69b8ec3b035e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7bxtk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:45Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:45 crc kubenswrapper[4786]: 
I0127 13:07:45.112538 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a9a6f6ed65f430774bf05c88a99889462a10a5cdcc0b70fcbaccd4cfec9b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:45Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.126169 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2ba35bdc2ffd3bc68ad3a19dbcd16b1b04e4e476958651146cf3ebbacf81570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:45Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.130413 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.130470 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.130483 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.130498 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.130507 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:45Z","lastTransitionTime":"2026-01-27T13:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.232766 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.232831 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.232849 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.232873 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.232890 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:45Z","lastTransitionTime":"2026-01-27T13:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.334900 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.334957 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.334973 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.334997 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.335014 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:45Z","lastTransitionTime":"2026-01-27T13:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.438793 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.438859 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.438872 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.438891 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.438904 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:45Z","lastTransitionTime":"2026-01-27T13:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.446118 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 17:16:26.545373826 +0000 UTC Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.464669 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:07:45 crc kubenswrapper[4786]: E0127 13:07:45.464874 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.541537 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.541590 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.541619 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.541640 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.541652 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:45Z","lastTransitionTime":"2026-01-27T13:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.644196 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.644252 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.644269 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.644320 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.644336 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:45Z","lastTransitionTime":"2026-01-27T13:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.746475 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.746513 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.746525 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.746545 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.746556 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:45Z","lastTransitionTime":"2026-01-27T13:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.850411 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.850470 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.850487 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.850507 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.850522 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:45Z","lastTransitionTime":"2026-01-27T13:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.953545 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.953639 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.953665 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.953687 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:45 crc kubenswrapper[4786]: I0127 13:07:45.953702 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:45Z","lastTransitionTime":"2026-01-27T13:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.055514 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.055559 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.055569 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.055586 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.055595 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:46Z","lastTransitionTime":"2026-01-27T13:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.158280 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.158332 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.158342 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.158362 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.158374 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:46Z","lastTransitionTime":"2026-01-27T13:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.261242 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.261305 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.261323 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.261348 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.261377 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:46Z","lastTransitionTime":"2026-01-27T13:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.365053 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.365103 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.365114 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.365130 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.365141 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:46Z","lastTransitionTime":"2026-01-27T13:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.447290 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 03:40:39.584236592 +0000 UTC Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.464221 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.464326 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.464501 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:07:46 crc kubenswrapper[4786]: E0127 13:07:46.464525 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:07:46 crc kubenswrapper[4786]: E0127 13:07:46.464588 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:07:46 crc kubenswrapper[4786]: E0127 13:07:46.464699 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.467799 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.467826 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.467838 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.467854 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.467865 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:46Z","lastTransitionTime":"2026-01-27T13:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.570452 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.570510 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.570526 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.570555 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.570580 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:46Z","lastTransitionTime":"2026-01-27T13:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.674018 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.674062 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.674072 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.674086 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.674096 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:46Z","lastTransitionTime":"2026-01-27T13:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.777877 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.777944 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.777961 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.777982 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.777998 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:46Z","lastTransitionTime":"2026-01-27T13:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.880485 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.880533 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.880543 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.880562 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.880573 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:46Z","lastTransitionTime":"2026-01-27T13:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.983006 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.983071 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.983088 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.983112 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:46 crc kubenswrapper[4786]: I0127 13:07:46.983131 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:46Z","lastTransitionTime":"2026-01-27T13:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.086444 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.086480 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.086492 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.086509 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.086521 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:47Z","lastTransitionTime":"2026-01-27T13:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.188652 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.188691 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.188705 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.188721 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.188733 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:47Z","lastTransitionTime":"2026-01-27T13:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.291039 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.291078 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.291086 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.291099 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.291108 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:47Z","lastTransitionTime":"2026-01-27T13:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.393861 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.393918 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.393932 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.393950 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.393963 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:47Z","lastTransitionTime":"2026-01-27T13:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.447978 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 16:49:13.218435698 +0000 UTC Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.464330 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:07:47 crc kubenswrapper[4786]: E0127 13:07:47.464489 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.478738 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8573a07-5c6b-490a-abd2-e38fe66ef4f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\
\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00de8064231a695efe43b465ac740c45561bd6e1998201059115249a9bd8118\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/
crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"light.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:07:16.073073 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:07:16.083509 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:07:16.083537 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083542 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083546 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:07:16.083549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:07:16.083552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:07:16.083555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:07:16.083684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 13:07:16.094546 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-572751212/tls.crt::/tmp/serving-cert-572751212/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769519220\\\\\\\\\\\\\\\" (2026-01-27 13:07:00 +0000 UTC to 2026-02-26 13:07:01 +0000 UTC (now=2026-01-27 13:07:16.094502626 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094907 1 
named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769519221\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769519221\\\\\\\\\\\\\\\" (2026-01-27 12:07:01 +0000 UTC to 2027-01-27 12:07:01 +0000 UTC (now=2026-01-27 13:07:16.094841616 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094999 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nF0127 13:07:16.095038 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:47Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.493120 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ee4772-2b03-404a-9de5-d98dcb431f0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6f25469fce6943fbdd305039e3ed4bbf9694eb3e485d9a7556a708a1434f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181c870b27579fdf828f52cee2ce312219d03f4d4c298f83cf4f74d0e5af0fc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30779fa178428679a956a66ac9e5f5729db30b3aa0f0c5930f224cf140c5703f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77096ca9ca40766ea3f6ef7e915698ff33f824ba2bcfc0bb2c5b9b2eb64f21f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:47Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.497246 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.497316 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.497327 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.497343 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.497354 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:47Z","lastTransitionTime":"2026-01-27T13:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.508816 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:47Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.524343 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:47Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.540383 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:47Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.552150 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bsqnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd62799-0a3c-4682-acb6-a2bc9121b391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27f958678979f9bc086a378e90e5c6c8fc466d1c52e58bcc042ff72fafb7a4d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bsqnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:47Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.569492 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8jf77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fswjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fswjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8jf77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:47Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:47 crc 
kubenswrapper[4786]: I0127 13:07:47.597093 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67692efe-6dd1-450c-9c0c-a16b104376b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd5d21270850bfee1e52c235f542505dc4b8129379b8ef932ed1d59c599abee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://5b35278bed2eea01653cf49906c6a1f3447c33d1b80963a3dd10e07a854ec57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9284ead95a7669e31809ac214e2297884715a760a2eb4984e404c059ff2b6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8aec363d5b95eaca7107351edfd1ae381e714391a4db22a72a30d0250534254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9a9fd35c4ee52eeffce0566113373c6d70b3ccadc124e7d6a4b8c7ba1d21a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:47Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.600982 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.601038 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.601049 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.601069 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.601082 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:47Z","lastTransitionTime":"2026-01-27T13:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.613598 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8f966fc9486cb42a28164dd56ec608330a6697d5e02318f78f9dfce3d79aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225905e87cce3c64d456d8b5e1bb4a58800fc55d59fcb9e98ef7523f1c8228f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:47Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.628806 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9q6dk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a290f38c-b94c-4233-9d98-9a54a728cedb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8999aa32071dc4f05aef19c7e34333b8788246bb340368b22d15120c38465d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5xht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9q6dk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:47Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.640225 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bh896" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22e89788-1c8e-46a7-97b4-3981352bb420\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3025c216a87acd5041f5dea80529063ca66929e021eadd55f17145fe1ae0a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dl86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bh896\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:47Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.653756 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbce2942-82d4-4c43-b41d-20c66d2b0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ef31e49a0cd3198de7ca02721a5731f4ed6283e79697e4f2b0eb2bd00712eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-722rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd2b4623ac1917ef8bd9d422064f7090f88a1ae36158d508d314c46b6d3506a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-722rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rgrth\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:47Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.667630 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bda5706f-b065-4472-a45c-baff1cca3c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://393d243fd50fbbc25df6261d62f7e4ddd7aee425cef5c4ccc0d8ff028cd34df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f3b3f74451aceef575e0385356b071170bdde56171fce36e1bdd5debdac37c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://823e47a91a33d234f34bb3e1529d300d469cab2bad4630ec73c30b9d669abe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"c
ri-o://279be35df94d0bffd8ba166fd9774277663989daf60d2bb656fd57fc7e9534ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://279be35df94d0bffd8ba166fd9774277663989daf60d2bb656fd57fc7e9534ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:47Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.690021 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fbd39cd4c82438fefef67fef48b606b60ed2194292d1f6d839e0ff776c31f0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fbd39cd4c82438fefef67fef48b606b60ed2194292d1f6d839e0ff776c31f0a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"ontroller openshift-machine-config-operator aa30290d-3a39-43ba-a212-6439bd680987 4486 0 2025-02-23 05:12:25 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[k8s-app:machine-config-controller] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true 
include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:mcc-proxy-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0075b93ab \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9001,TargetPort:{0 9001 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-config-controller,},ClusterIP:10.217.5.16,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.16],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6d56q_openshift-ovn-kubernetes(ad21a31d-efbf-4c10-b3d1-0f6cf71793bd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316
c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6d56q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:47Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.704242 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.704280 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.704291 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.704307 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.704318 4786 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:47Z","lastTransitionTime":"2026-01-27T13:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.705506 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a9a6f6ed65f430774bf05c88a99889462a10a5cdcc0b70fcbaccd4cfec9b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:47Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.721064 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2ba35bdc2ffd3bc68ad3a19dbcd16b1b04e4e476958651146cf3ebbacf81570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:47Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.742096 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d153375a-777b-4331-992a-81c845c6d6eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f66f1182d0247b3a8413ae9ea30f8a3308dbe1a112c01473bae806c906f7383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a786
dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-prn84\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:47Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.755372 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c6a2646-52f7-41be-8a81-3fed6eac75cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93f091073d8f9193f660e52b257ca0aba0a3b4efff7d6d8f1ee5ed0e4166162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c39d828ab97f5bcb6d5e29df28ec844ede201872825065370a69b8ec3b035e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7bxtk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:47Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:47 crc kubenswrapper[4786]: 
I0127 13:07:47.805921 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.805955 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.805964 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.805977 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.805987 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:47Z","lastTransitionTime":"2026-01-27T13:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.908235 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.908307 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.908322 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.908340 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:47 crc kubenswrapper[4786]: I0127 13:07:47.908352 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:47Z","lastTransitionTime":"2026-01-27T13:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.011891 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.011972 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.011992 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.012021 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.012045 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:48Z","lastTransitionTime":"2026-01-27T13:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.115034 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.115071 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.115078 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.115092 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.115101 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:48Z","lastTransitionTime":"2026-01-27T13:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.119186 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496-metrics-certs\") pod \"network-metrics-daemon-8jf77\" (UID: \"a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496\") " pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:07:48 crc kubenswrapper[4786]: E0127 13:07:48.119434 4786 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 13:07:48 crc kubenswrapper[4786]: E0127 13:07:48.119538 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496-metrics-certs podName:a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496 nodeName:}" failed. No retries permitted until 2026-01-27 13:08:04.119510913 +0000 UTC m=+67.330125032 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496-metrics-certs") pod "network-metrics-daemon-8jf77" (UID: "a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.218868 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.218962 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.218987 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.219016 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.219034 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:48Z","lastTransitionTime":"2026-01-27T13:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.320551 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.320730 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.320763 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.320782 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.320802 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:07:48 crc kubenswrapper[4786]: E0127 13:07:48.320922 4786 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 13:07:48 crc kubenswrapper[4786]: E0127 13:07:48.320955 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:08:20.320903317 +0000 UTC m=+83.531517476 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:07:48 crc kubenswrapper[4786]: E0127 13:07:48.321017 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 13:08:20.32100134 +0000 UTC m=+83.531615499 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 13:07:48 crc kubenswrapper[4786]: E0127 13:07:48.321142 4786 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 13:07:48 crc kubenswrapper[4786]: E0127 13:07:48.321179 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 13:08:20.321162214 +0000 UTC m=+83.531776323 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 13:07:48 crc kubenswrapper[4786]: E0127 13:07:48.321284 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 13:07:48 crc kubenswrapper[4786]: E0127 13:07:48.321318 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 13:07:48 crc kubenswrapper[4786]: E0127 13:07:48.321323 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered 
Jan 27 13:07:48 crc kubenswrapper[4786]: E0127 13:07:48.321342 4786 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:07:48 crc kubenswrapper[4786]: E0127 13:07:48.321349 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 13:07:48 crc kubenswrapper[4786]: E0127 13:07:48.321367 4786 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:07:48 crc kubenswrapper[4786]: E0127 13:07:48.321410 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 13:08:20.32140056 +0000 UTC m=+83.532014669 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:07:48 crc kubenswrapper[4786]: E0127 13:07:48.321448 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 13:08:20.321420851 +0000 UTC m=+83.532035100 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.322734 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.322785 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.322800 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.322826 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.322840 4786 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:48Z","lastTransitionTime":"2026-01-27T13:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.426342 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.426402 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.426416 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.426435 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.426450 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:48Z","lastTransitionTime":"2026-01-27T13:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.449157 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 20:07:02.112255491 +0000 UTC Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.464852 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:07:48 crc kubenswrapper[4786]: E0127 13:07:48.465053 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.465310 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:07:48 crc kubenswrapper[4786]: E0127 13:07:48.465406 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.466085 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:07:48 crc kubenswrapper[4786]: E0127 13:07:48.466271 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.529460 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.529521 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.529534 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.529552 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.529565 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:48Z","lastTransitionTime":"2026-01-27T13:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.632169 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.632202 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.632210 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.632224 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.632232 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:48Z","lastTransitionTime":"2026-01-27T13:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.735692 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.735764 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.735782 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.735810 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.735844 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:48Z","lastTransitionTime":"2026-01-27T13:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.839971 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.840052 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.840078 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.840115 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.840138 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:48Z","lastTransitionTime":"2026-01-27T13:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.943488 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.943539 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.943551 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.943570 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:48 crc kubenswrapper[4786]: I0127 13:07:48.943584 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:48Z","lastTransitionTime":"2026-01-27T13:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.047476 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.047552 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.047570 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.047592 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.047628 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:49Z","lastTransitionTime":"2026-01-27T13:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.150554 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.150636 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.150654 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.150686 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.150705 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:49Z","lastTransitionTime":"2026-01-27T13:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.253740 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.253793 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.253809 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.253825 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.253839 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:49Z","lastTransitionTime":"2026-01-27T13:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.356927 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.357051 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.357082 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.357129 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.357157 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:49Z","lastTransitionTime":"2026-01-27T13:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.449672 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 22:51:14.666283518 +0000 UTC Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.459951 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.460004 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.460017 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.460036 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.460048 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:49Z","lastTransitionTime":"2026-01-27T13:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.464621 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:07:49 crc kubenswrapper[4786]: E0127 13:07:49.464791 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.565412 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.565508 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.565540 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.565573 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.565596 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:49Z","lastTransitionTime":"2026-01-27T13:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.668274 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.668315 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.668327 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.668351 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.668364 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:49Z","lastTransitionTime":"2026-01-27T13:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.771544 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.771585 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.771595 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.771637 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.771651 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:49Z","lastTransitionTime":"2026-01-27T13:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.874182 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.874688 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.874818 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.874959 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.875098 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:49Z","lastTransitionTime":"2026-01-27T13:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.978438 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.978992 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.979137 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.979291 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:49 crc kubenswrapper[4786]: I0127 13:07:49.979435 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:49Z","lastTransitionTime":"2026-01-27T13:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.083418 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.083479 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.083496 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.083529 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.083551 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:50Z","lastTransitionTime":"2026-01-27T13:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.187440 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.187492 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.187503 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.187528 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.187543 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:50Z","lastTransitionTime":"2026-01-27T13:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.291019 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.291078 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.291099 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.291127 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.291147 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:50Z","lastTransitionTime":"2026-01-27T13:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.394032 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.394087 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.394097 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.394115 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.394126 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:50Z","lastTransitionTime":"2026-01-27T13:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.450768 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 23:50:45.949895346 +0000 UTC Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.468779 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:07:50 crc kubenswrapper[4786]: E0127 13:07:50.468990 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.469328 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:07:50 crc kubenswrapper[4786]: E0127 13:07:50.469527 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.469639 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:07:50 crc kubenswrapper[4786]: E0127 13:07:50.469960 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.496521 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.496562 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.496574 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.496595 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.496632 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:50Z","lastTransitionTime":"2026-01-27T13:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.599755 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.599814 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.599828 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.599849 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.599861 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:50Z","lastTransitionTime":"2026-01-27T13:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.702543 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.702629 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.702649 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.702673 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.702689 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:50Z","lastTransitionTime":"2026-01-27T13:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.806547 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.806692 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.806736 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.806774 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.806801 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:50Z","lastTransitionTime":"2026-01-27T13:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.910034 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.910109 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.910128 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.910155 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:50 crc kubenswrapper[4786]: I0127 13:07:50.910175 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:50Z","lastTransitionTime":"2026-01-27T13:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.013406 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.013454 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.013466 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.013488 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.013505 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:51Z","lastTransitionTime":"2026-01-27T13:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.117557 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.117692 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.117704 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.117729 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.117744 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:51Z","lastTransitionTime":"2026-01-27T13:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.220818 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.220871 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.220884 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.220908 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.220924 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:51Z","lastTransitionTime":"2026-01-27T13:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.326152 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.326330 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.326352 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.327180 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.327600 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:51Z","lastTransitionTime":"2026-01-27T13:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.431664 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.431715 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.431724 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.431741 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.431751 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:51Z","lastTransitionTime":"2026-01-27T13:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.451846 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 17:24:21.15303308 +0000 UTC Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.464694 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:07:51 crc kubenswrapper[4786]: E0127 13:07:51.465037 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.535239 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.535316 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.535339 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.535373 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.535436 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:51Z","lastTransitionTime":"2026-01-27T13:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.639388 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.639472 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.639491 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.639510 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.639523 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:51Z","lastTransitionTime":"2026-01-27T13:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.743377 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.743448 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.743462 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.743485 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.743529 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:51Z","lastTransitionTime":"2026-01-27T13:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.847144 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.847219 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.847242 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.847270 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.847290 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:51Z","lastTransitionTime":"2026-01-27T13:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.950141 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.950198 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.950210 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.950226 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:51 crc kubenswrapper[4786]: I0127 13:07:51.950241 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:51Z","lastTransitionTime":"2026-01-27T13:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.053839 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.053917 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.053944 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.053980 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.054003 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:52Z","lastTransitionTime":"2026-01-27T13:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.158366 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.158428 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.158445 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.158472 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.158488 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:52Z","lastTransitionTime":"2026-01-27T13:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.261849 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.261923 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.261949 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.261988 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.262013 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:52Z","lastTransitionTime":"2026-01-27T13:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.365314 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.365390 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.365416 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.365450 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.365474 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:52Z","lastTransitionTime":"2026-01-27T13:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.452713 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 06:19:37.555227025 +0000 UTC Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.464171 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.464189 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.464221 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:07:52 crc kubenswrapper[4786]: E0127 13:07:52.464466 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:07:52 crc kubenswrapper[4786]: E0127 13:07:52.464589 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:07:52 crc kubenswrapper[4786]: E0127 13:07:52.464912 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.468033 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.468083 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.468097 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.468121 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.468135 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:52Z","lastTransitionTime":"2026-01-27T13:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.570505 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.570568 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.570581 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.570637 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.570653 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:52Z","lastTransitionTime":"2026-01-27T13:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.673890 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.673952 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.673968 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.673994 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.674015 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:52Z","lastTransitionTime":"2026-01-27T13:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.777811 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.777880 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.777899 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.777928 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.777951 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:52Z","lastTransitionTime":"2026-01-27T13:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.880983 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.881076 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.881106 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.881144 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.881172 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:52Z","lastTransitionTime":"2026-01-27T13:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.984108 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.984137 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.984146 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.984202 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:52 crc kubenswrapper[4786]: I0127 13:07:52.984211 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:52Z","lastTransitionTime":"2026-01-27T13:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.086058 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.086325 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.086422 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.086544 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.086625 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:53Z","lastTransitionTime":"2026-01-27T13:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.190222 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.190550 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.190593 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.190689 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.190724 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:53Z","lastTransitionTime":"2026-01-27T13:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.295177 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.295220 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.295231 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.295249 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.295262 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:53Z","lastTransitionTime":"2026-01-27T13:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.395959 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.396013 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.396025 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.396050 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.396069 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:53Z","lastTransitionTime":"2026-01-27T13:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:53 crc kubenswrapper[4786]: E0127 13:07:53.414906 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18042af8-71e3-4882-b2ca-158fe4a2012f\\\",\\\"systemUUID\\\":\\\"56795cdc-7796-46ae-b42e-edbe6c464279\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:53Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.420311 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.420351 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.420363 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.420428 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.420446 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:53Z","lastTransitionTime":"2026-01-27T13:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:53 crc kubenswrapper[4786]: E0127 13:07:53.440000 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18042af8-71e3-4882-b2ca-158fe4a2012f\\\",\\\"systemUUID\\\":\\\"56795cdc-7796-46ae-b42e-edbe6c464279\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:53Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.445397 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.445506 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.445527 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.445584 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.445655 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:53Z","lastTransitionTime":"2026-01-27T13:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.453304 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 02:05:25.127016728 +0000 UTC Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.464041 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:07:53 crc kubenswrapper[4786]: E0127 13:07:53.464270 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.466461 4786 scope.go:117] "RemoveContainer" containerID="2fbd39cd4c82438fefef67fef48b606b60ed2194292d1f6d839e0ff776c31f0a" Jan 27 13:07:53 crc kubenswrapper[4786]: E0127 13:07:53.470682 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:53Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18042af8-71e3-4882-b2ca-158fe4a2012f\\\",\\\"systemUUID\\\":\\\"56795cdc-7796-46ae-b42e-edbe6c464279\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:53Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.478545 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.478592 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.478623 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.478644 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.478705 4786 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:53Z","lastTransitionTime":"2026-01-27T13:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:53 crc kubenswrapper[4786]: E0127 13:07:53.498481 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18042af8-71e3-4882-b2ca-158fe4a2012f\\\",\\\"systemUUID\\\":\\\"56795cdc-7796-46ae-b42e-edbe6c464279\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:53Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.504368 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.504514 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.504574 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.504664 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.504709 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:53Z","lastTransitionTime":"2026-01-27T13:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:53 crc kubenswrapper[4786]: E0127 13:07:53.522283 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:07:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18042af8-71e3-4882-b2ca-158fe4a2012f\\\",\\\"systemUUID\\\":\\\"56795cdc-7796-46ae-b42e-edbe6c464279\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:53Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:53 crc kubenswrapper[4786]: E0127 13:07:53.522530 4786 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.525090 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.525244 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.525365 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.525492 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.525637 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:53Z","lastTransitionTime":"2026-01-27T13:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.629868 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.630373 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.630387 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.630432 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.630447 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:53Z","lastTransitionTime":"2026-01-27T13:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.733811 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.733860 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.733876 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.733898 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.733915 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:53Z","lastTransitionTime":"2026-01-27T13:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.817120 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6d56q_ad21a31d-efbf-4c10-b3d1-0f6cf71793bd/ovnkube-controller/1.log" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.820144 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" event={"ID":"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd","Type":"ContainerStarted","Data":"95953d20b6b9db4384a5f9b667793200b2bf76ea71791a88cd3dc1ecccb331f0"} Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.820857 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.837346 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.837418 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.837433 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.837455 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.837494 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:53Z","lastTransitionTime":"2026-01-27T13:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.842367 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bda5706f-b065-4472-a45c-baff1cca3c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://393d243fd50fbbc25df6261d62f7e4ddd7aee425cef5c4ccc0d8ff028cd34df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f3b3f74451aceef575e0385356b0
71170bdde56171fce36e1bdd5debdac37c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://823e47a91a33d234f34bb3e1529d300d469cab2bad4630ec73c30b9d669abe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://279be35df94d0bffd8ba166fd9774277663989daf60d2bb656fd57fc7e9534ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://279be35df94d0bffd8ba166fd9774277663989daf60d2bb656fd57fc7e9534ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:53Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.870509 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95953d20b6b9db4384a5f9b667793200b2bf76ea71791a88cd3dc1ecccb331f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fbd39cd4c82438fefef67fef48b606b60ed2194292d1f6d839e0ff776c31f0a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"ontroller openshift-machine-config-operator aa30290d-3a39-43ba-a212-6439bd680987 4486 0 2025-02-23 05:12:25 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[k8s-app:machine-config-controller] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true 
include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:mcc-proxy-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0075b93ab \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9001,TargetPort:{0 9001 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-config-controller,},ClusterIP:10.217.5.16,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.16],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"
/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\
\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6d56q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:53Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.891995 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d153375a-777b-4331-992a-81c845c6d6eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f66f1182d0247b3a8413ae9ea30f8a3308dbe1a112c01473bae806c906f7383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"sys
tem-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-prn84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:53Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.914051 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c6a2646-52f7-41be-8a81-3fed6eac75cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93f091073d8f9193f660e
52b257ca0aba0a3b4efff7d6d8f1ee5ed0e4166162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c39d828ab97f5bcb6d5e29df28ec844ede201872825065370a69b8ec3b035e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\
":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7bxtk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:53Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.932000 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a9a6f6ed65f430774bf05c88a99889462a10a5cdcc0b70fcbaccd4cfec9b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\
\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:53Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.942866 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.942924 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.942942 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.942972 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.942989 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:53Z","lastTransitionTime":"2026-01-27T13:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:53 crc kubenswrapper[4786]: I0127 13:07:53.947390 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2ba35bdc2ffd3bc68ad3a19dbcd16b1b04e4e476958651146cf3ebbacf81570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:53Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.013429 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:54Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.027121 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:54Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.043547 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bsqnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd62799-0a3c-4682-acb6-a2bc9121b391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27f958678979f9bc086a378e90e5c6c8fc466d1c52e58bcc042ff72fafb7a4d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bsqnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:54Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.045419 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.045466 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.045478 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.045495 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.045510 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:54Z","lastTransitionTime":"2026-01-27T13:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.064508 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8jf77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fswjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fswjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8jf77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:54Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:54 crc 
kubenswrapper[4786]: I0127 13:07:54.081411 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8573a07-5c6b-490a-abd2-e38fe66ef4f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00de8064231a6
95efe43b465ac740c45561bd6e1998201059115249a9bd8118\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"light.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:07:16.073073 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:07:16.083509 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:07:16.083537 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083542 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083546 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:07:16.083549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:07:16.083552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:07:16.083555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:07:16.083684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 13:07:16.094546 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-572751212/tls.crt::/tmp/serving-cert-572751212/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769519220\\\\\\\\\\\\\\\" (2026-01-27 13:07:00 +0000 UTC to 2026-02-26 13:07:01 +0000 UTC (now=2026-01-27 13:07:16.094502626 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094907 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769519221\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769519221\\\\\\\\\\\\\\\" (2026-01-27 12:07:01 +0000 UTC to 2027-01-27 12:07:01 +0000 UTC (now=2026-01-27 13:07:16.094841616 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094999 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nF0127 13:07:16.095038 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd9
0d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:54Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.096426 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ee4772-2b03-404a-9de5-d98dcb431f0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6f25469fce6943fbdd305039e3ed4bbf9694eb3e485d9a7556a708a1434f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181c870b27579fdf828f52cee2ce312219d03f4d4c298f83cf4f74d0e5af0fc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30779fa178428679a956a66ac9e5f5729db30b3aa0f0c5930f224cf140c5703f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77096ca9ca40766ea3f6ef7e915698ff33f824ba2bcfc0bb2c5b9b2eb64f21f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:54Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.110821 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:54Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.122033 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bh896" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22e89788-1c8e-46a7-97b4-3981352bb420\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3025c216a87acd5041f5dea80529063ca66929e021eadd55f17145fe1ae0a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dl86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bh896\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:54Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.134991 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbce2942-82d4-4c43-b41d-20c66d2b0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ef31e49a0cd3198de7ca02721a5731f4ed6283e79697e4f2b0eb2bd00712eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-722rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd2b4623ac1917ef8bd9d422064f7090f88a1ae36158d508d314c46b6d3506a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-722rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rgrth\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:54Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.148720 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.148785 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.148798 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.148820 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.148841 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:54Z","lastTransitionTime":"2026-01-27T13:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.164462 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67692efe-6dd1-450c-9c0c-a16b104376b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd5d21270850bfee1e52c235f542505dc4b8129379b8ef932ed1d59c599abee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b35278bed2eea01653cf49906c6a1f3447c33d1b80963a3dd10e07a854ec57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9284ead95a7669e31809ac214e2297884715a760a2eb4984e404c059ff2b6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8aec363d5b95eaca7107351edfd1ae381e714391a4db22a72a30d0250534254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9a9fd35c4ee52eeffce0566113373c6d70b3ccadc124e7d6a4b8c7ba1d21a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:54Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.181633 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8f966fc9486cb42a28164dd56ec608330a6697d5e02318f78f9dfce3d79aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225905e87cce3c64d456d8b5e1bb4a58800fc55d59fcb9e98ef7523f1c8228f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:54Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.201875 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9q6dk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a290f38c-b94c-4233-9d98-9a54a728cedb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8999aa32071dc4f05aef19c7e34333b8788246bb340368b22d15120c38465d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5xht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9q6dk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:54Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.251231 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:54 crc 
kubenswrapper[4786]: I0127 13:07:54.251285 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.251297 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.251317 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.251330 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:54Z","lastTransitionTime":"2026-01-27T13:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.354066 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.354119 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.354135 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.354156 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.354168 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:54Z","lastTransitionTime":"2026-01-27T13:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.454509 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 18:00:27.676100682 +0000 UTC Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.456046 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.456080 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.456092 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.456107 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.456119 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:54Z","lastTransitionTime":"2026-01-27T13:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.464330 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.464394 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:07:54 crc kubenswrapper[4786]: E0127 13:07:54.464457 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:07:54 crc kubenswrapper[4786]: E0127 13:07:54.464528 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.464345 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:07:54 crc kubenswrapper[4786]: E0127 13:07:54.464816 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.559786 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.559857 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.559867 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.559893 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.559907 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:54Z","lastTransitionTime":"2026-01-27T13:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.663632 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.663684 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.663697 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.663715 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.663727 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:54Z","lastTransitionTime":"2026-01-27T13:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.766069 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.766116 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.766128 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.766147 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.766157 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:54Z","lastTransitionTime":"2026-01-27T13:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.824673 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6d56q_ad21a31d-efbf-4c10-b3d1-0f6cf71793bd/ovnkube-controller/2.log" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.825361 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6d56q_ad21a31d-efbf-4c10-b3d1-0f6cf71793bd/ovnkube-controller/1.log" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.827918 4786 generic.go:334] "Generic (PLEG): container finished" podID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerID="95953d20b6b9db4384a5f9b667793200b2bf76ea71791a88cd3dc1ecccb331f0" exitCode=1 Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.827984 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" event={"ID":"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd","Type":"ContainerDied","Data":"95953d20b6b9db4384a5f9b667793200b2bf76ea71791a88cd3dc1ecccb331f0"} Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.828045 4786 scope.go:117] "RemoveContainer" containerID="2fbd39cd4c82438fefef67fef48b606b60ed2194292d1f6d839e0ff776c31f0a" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.828843 4786 scope.go:117] "RemoveContainer" containerID="95953d20b6b9db4384a5f9b667793200b2bf76ea71791a88cd3dc1ecccb331f0" Jan 27 13:07:54 crc kubenswrapper[4786]: E0127 13:07:54.829028 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6d56q_openshift-ovn-kubernetes(ad21a31d-efbf-4c10-b3d1-0f6cf71793bd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.846380 4786 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c6a2646-52f7-41be-8a81-3fed6eac75cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93f091073d8f9193f660e52b257ca0aba0a3b4efff7d6d8f1ee5ed0e4166162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c39d828ab97f5bcb6d5e29df28ec844ede201872825065370a69b8ec3b035e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7bxtk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:54Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.862355 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a9a6f6ed65f430774bf05c88a99889462a10a5cdcc0b70fcbaccd4cfec9b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:54Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.868399 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.868428 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.868438 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.868453 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.868465 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:54Z","lastTransitionTime":"2026-01-27T13:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.878472 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2ba35bdc2ffd3bc68ad3a19dbcd16b1b04e4e476958651146cf3ebbacf81570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:54Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.893393 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d153375a-777b-4331-992a-81c845c6d6eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f66f1182d0247b3a8413ae9ea30f8a3308dbe1a112c01473bae806c906f7383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-c
ni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/c
ni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-prn84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:54Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.906212 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:54Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.918493 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bsqnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd62799-0a3c-4682-acb6-a2bc9121b391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27f958678979f9bc086a378e90e5c6c8fc466d1c52e58bcc042ff72fafb7a4d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bsqnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:54Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.931728 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8jf77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fswjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fswjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8jf77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:54Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:54 crc 
kubenswrapper[4786]: I0127 13:07:54.943845 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8573a07-5c6b-490a-abd2-e38fe66ef4f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00de8064231a6
95efe43b465ac740c45561bd6e1998201059115249a9bd8118\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"light.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:07:16.073073 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:07:16.083509 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:07:16.083537 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083542 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083546 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:07:16.083549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:07:16.083552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:07:16.083555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:07:16.083684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 13:07:16.094546 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-572751212/tls.crt::/tmp/serving-cert-572751212/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769519220\\\\\\\\\\\\\\\" (2026-01-27 13:07:00 +0000 UTC to 2026-02-26 13:07:01 +0000 UTC (now=2026-01-27 13:07:16.094502626 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094907 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769519221\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769519221\\\\\\\\\\\\\\\" (2026-01-27 12:07:01 +0000 UTC to 2027-01-27 12:07:01 +0000 UTC (now=2026-01-27 13:07:16.094841616 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094999 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nF0127 13:07:16.095038 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd9
0d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:54Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.955703 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ee4772-2b03-404a-9de5-d98dcb431f0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6f25469fce6943fbdd305039e3ed4bbf9694eb3e485d9a7556a708a1434f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181c870b27579fdf828f52cee2ce312219d03f4d4c298f83cf4f74d0e5af0fc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30779fa178428679a956a66ac9e5f5729db30b3aa0f0c5930f224cf140c5703f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77096ca9ca40766ea3f6ef7e915698ff33f824ba2bcfc0bb2c5b9b2eb64f21f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:54Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.969027 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:54Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.971178 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.971229 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.971245 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 
13:07:54.971267 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.971281 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:54Z","lastTransitionTime":"2026-01-27T13:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.984582 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:54Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:54 crc kubenswrapper[4786]: I0127 13:07:54.997690 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rgrth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbce2942-82d4-4c43-b41d-20c66d2b0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ef31e49a0cd3198de7ca02721a5731f4ed6283e79697e4f2b0eb2bd00712eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-722rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd2b4623ac1917ef8bd9d422064f7090f88a1
ae36158d508d314c46b6d3506a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-722rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:54Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.025596 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67692efe-6dd1-450c-9c0c-a16b104376b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd5d21270850bfee1e52c235f542505dc4b8129379b8ef932ed1d59c599abee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b35278bed2eea01653cf49906c6a1f3447c33d1b80963a3dd10e07a854ec57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9284ead95a7669e31809ac214e2297884715a760a2eb4984e404c059ff2b6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8aec363d5b95eaca7107351edfd1ae381e714391a4db22a72a30d0250534254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9a9fd35c4ee52eeffce0566113373c6d70b3ccadc124e7d6a4b8c7ba1d21a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:55Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.043590 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8f966fc9486cb42a28164dd56ec608330a6697d5e02318f78f9dfce3d79aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225905e87cce3c64d456d8b5e1bb4a58800fc55d59fcb9e98ef7523f1c8228f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:55Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.058880 4786 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9q6dk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a290f38c-b94c-4233-9d98-9a54a728cedb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8999aa32071dc4f05aef19c7e34333b8788246bb340368b22d15120c38465d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\"
:\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5xht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9q6dk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:55Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.070673 4786 
status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bh896" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22e89788-1c8e-46a7-97b4-3981352bb420\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3025c216a87acd5041f5dea80529063ca66929e021eadd55f17145fe1ae0a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dl86\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bh896\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:55Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.074239 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.074300 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.074313 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.074330 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.074342 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:55Z","lastTransitionTime":"2026-01-27T13:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.086460 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bda5706f-b065-4472-a45c-baff1cca3c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://393d243fd50fbbc25df6261d62f7e4ddd7aee425cef5c4ccc0d8ff028cd34df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f3b3f74451aceef575e0385356b0
71170bdde56171fce36e1bdd5debdac37c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://823e47a91a33d234f34bb3e1529d300d469cab2bad4630ec73c30b9d669abe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://279be35df94d0bffd8ba166fd9774277663989daf60d2bb656fd57fc7e9534ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://279be35df94d0bffd8ba166fd9774277663989daf60d2bb656fd57fc7e9534ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:55Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.106080 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95953d20b6b9db4384a5f9b667793200b2bf76ea71791a88cd3dc1ecccb331f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fbd39cd4c82438fefef67fef48b606b60ed2194292d1f6d839e0ff776c31f0a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"ontroller openshift-machine-config-operator aa30290d-3a39-43ba-a212-6439bd680987 4486 0 2025-02-23 05:12:25 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[k8s-app:machine-config-controller] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true 
include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:mcc-proxy-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0075b93ab \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9001,TargetPort:{0 9001 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-config-controller,},ClusterIP:10.217.5.16,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.16],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95953d20b6b9db4384a5f9b667793200b2bf76ea71791a88cd3dc1ecccb331f0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:07:54Z\\\",\\\"message\\\":\\\"tion, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2026-01-27T13:07:54Z is after 2025-08-24T17:21:41Z]\\\\nI0127 13:07:54.419132 6505 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-bh896\\\\nI0127 13:07:54.419138 6505 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-bh896\\\\nI0127 13:07:54.419124 6505 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-oauth-apiserver/api_TCP_cluster\\\\\\\", UUID:\\\\\\\"fe46cb89-4e54-4175-a112-1c5224cd299e\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-oauth-apiserver/api\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, 
Rou\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f
\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6d56q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:55Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.177289 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.177350 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.177362 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.177377 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.177387 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:55Z","lastTransitionTime":"2026-01-27T13:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.279884 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.279933 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.279944 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.279959 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.279967 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:55Z","lastTransitionTime":"2026-01-27T13:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.382650 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.382697 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.382715 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.382738 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.382750 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:55Z","lastTransitionTime":"2026-01-27T13:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.455839 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 13:58:39.785673479 +0000 UTC Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.464376 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:07:55 crc kubenswrapper[4786]: E0127 13:07:55.464564 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.485151 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.485192 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.485200 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.485215 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.485226 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:55Z","lastTransitionTime":"2026-01-27T13:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.588096 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.588189 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.588214 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.588250 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.588270 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:55Z","lastTransitionTime":"2026-01-27T13:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.691052 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.691114 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.691127 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.691146 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.691157 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:55Z","lastTransitionTime":"2026-01-27T13:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.793925 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.793998 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.794009 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.794028 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.794040 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:55Z","lastTransitionTime":"2026-01-27T13:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.833525 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6d56q_ad21a31d-efbf-4c10-b3d1-0f6cf71793bd/ovnkube-controller/2.log" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.837786 4786 scope.go:117] "RemoveContainer" containerID="95953d20b6b9db4384a5f9b667793200b2bf76ea71791a88cd3dc1ecccb331f0" Jan 27 13:07:55 crc kubenswrapper[4786]: E0127 13:07:55.838030 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6d56q_openshift-ovn-kubernetes(ad21a31d-efbf-4c10-b3d1-0f6cf71793bd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.852331 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ee4772-2b03-404a-9de5-d98dcb431f0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6f25469fce6943fbdd305039e3ed4bbf9694eb3e485d9a7556a708a1434f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181c870b27579fdf828f52cee2ce312219d03f4d4c298f83cf4f74d0e5af0fc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30779fa178428679a956a66ac9e5f5729db30b3aa0f0c5930f224cf140c5703f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77096ca9ca40766ea3f6ef7e915698ff33f824ba2bcfc0bb2c5b9b2eb64f21f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:55Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.867238 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:55Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.898197 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.898247 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.898258 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 
13:07:55.898274 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.898284 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:55Z","lastTransitionTime":"2026-01-27T13:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.909332 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:55Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.928535 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:55Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.941640 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bsqnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd62799-0a3c-4682-acb6-a2bc9121b391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27f958678979f9bc086a378e90e5c6c8fc466d1c52e58bcc042ff72fafb7a4d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bsqnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:55Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.954617 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8jf77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fswjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fswjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8jf77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:55Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:55 crc 
kubenswrapper[4786]: I0127 13:07:55.966414 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8573a07-5c6b-490a-abd2-e38fe66ef4f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00de8064231a6
95efe43b465ac740c45561bd6e1998201059115249a9bd8118\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"light.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:07:16.073073 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:07:16.083509 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:07:16.083537 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083542 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083546 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:07:16.083549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:07:16.083552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:07:16.083555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:07:16.083684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 13:07:16.094546 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-572751212/tls.crt::/tmp/serving-cert-572751212/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769519220\\\\\\\\\\\\\\\" (2026-01-27 13:07:00 +0000 UTC to 2026-02-26 13:07:01 +0000 UTC (now=2026-01-27 13:07:16.094502626 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094907 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769519221\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769519221\\\\\\\\\\\\\\\" (2026-01-27 12:07:01 +0000 UTC to 2027-01-27 12:07:01 +0000 UTC (now=2026-01-27 13:07:16.094841616 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094999 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nF0127 13:07:16.095038 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd9
0d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:55Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.979749 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8f966fc9486cb42a28164dd56ec608330a6697d5e02318f78f9dfce3d79aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225905e87cce3c64d456d8b5e1bb4a58800fc55d59fcb9e98ef7523f1c8228f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:55Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.991263 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9q6dk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a290f38c-b94c-4233-9d98-9a54a728cedb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8999aa32071dc4f05aef19c7e34333b8788246bb340368b22d15120c38465d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5xht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9q6dk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:55Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.999939 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:55 crc 
kubenswrapper[4786]: I0127 13:07:55.999975 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:55 crc kubenswrapper[4786]: I0127 13:07:55.999986 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.000001 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.000012 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:55Z","lastTransitionTime":"2026-01-27T13:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.002369 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bh896" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22e89788-1c8e-46a7-97b4-3981352bb420\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3025c216a87acd5041f5dea80529063ca66929e021eadd55f17145fe1ae0a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dl86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bh896\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:56Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.014777 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbce2942-82d4-4c43-b41d-20c66d2b0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ef31e49a0cd3198de7ca02721a5731f4ed6283e79697e4f2b0eb2bd00712eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-722rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd2b4623ac1917ef8bd9d422064f7090f88a1ae36158d508d314c46b6d3506a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-722rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rgrth\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:56Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.035427 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67692efe-6dd1-450c-9c0c-a16b104376b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd5d21270850bfee1e52c235f542505dc4b8129379b8ef932ed1d59c599abee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
tc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b35278bed2eea01653cf49906c6a1f3447c33d1b80963a3dd10e07a854ec57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9284ead95a7669e31809ac214e2297884715a760a2eb4984e404c059ff2b6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://b8aec363d5b95eaca7107351edfd1ae381e714391a4db22a72a30d0250534254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9a9fd35c4ee52eeffce0566113373c6d70b3ccadc124e7d6a4b8c7ba1d21a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7
c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:56Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.056406 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95953d20b6b9db4384a5f9b667793200b2bf76ea71791a88cd3dc1ecccb331f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95953d20b6b9db4384a5f9b667793200b2bf76ea71791a88cd3dc1ecccb331f0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:07:54Z\\\",\\\"message\\\":\\\"tion, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed 
to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:54Z is after 2025-08-24T17:21:41Z]\\\\nI0127 13:07:54.419132 6505 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-bh896\\\\nI0127 13:07:54.419138 6505 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-bh896\\\\nI0127 13:07:54.419124 6505 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-oauth-apiserver/api_TCP_cluster\\\\\\\", UUID:\\\\\\\"fe46cb89-4e54-4175-a112-1c5224cd299e\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-oauth-apiserver/api\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Rou\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6d56q_openshift-ovn-kubernetes(ad21a31d-efbf-4c10-b3d1-0f6cf71793bd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316
c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6d56q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:56Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.069536 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bda5706f-b065-4472-a45c-baff1cca3c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://393d243fd50fbbc25df6261d62f7e4ddd7aee425cef5c4ccc0d8ff028cd34df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f3b3f74451aceef575e0385356b071170bdde56171fce36e1bdd5debdac37c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://823e47a91a33d234f34bb3e1529d300d469cab2bad4630ec73c30b9d669abe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://279be35df94d0bffd8ba166fd9774277663989daf60d2bb656fd57fc7e9534ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://279be35df94d0bffd8ba166fd9774277663989daf60d2bb656fd57fc7e9534ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:56Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.083400 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a9a6f6ed65f430774bf05c88a99889462a10a5cdcc0b70fcbaccd4cfec9b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:56Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.098065 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2ba35bdc2ffd3bc68ad3a19dbcd16b1b04e4e476958651146cf3ebbacf81570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:56Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.102630 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.102659 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.102667 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.102680 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.102690 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:56Z","lastTransitionTime":"2026-01-27T13:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.114692 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d153375a-777b-4331-992a-81c845c6d6eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f66f1182d0247b3a8413ae9ea30f8a3308dbe1a112c01473bae806c906f7383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-prn84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:56Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.126047 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c6a2646-52f7-41be-8a81-3fed6eac75cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93f091073d8f9193f660e52b257ca0aba0a3b4efff7d6d8f1ee5ed0e4166162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c39d828ab97f5bcb6d5e29df28ec844ede201872825065370a69b8ec3b035e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7bxtk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:56Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.208520 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.208933 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.209103 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.209160 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.209173 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:56Z","lastTransitionTime":"2026-01-27T13:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.311939 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.311988 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.311997 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.312013 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.312023 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:56Z","lastTransitionTime":"2026-01-27T13:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.414844 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.414878 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.414887 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.414899 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.414908 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:56Z","lastTransitionTime":"2026-01-27T13:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.456823 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 07:39:44.512974437 +0000 UTC Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.464370 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:07:56 crc kubenswrapper[4786]: E0127 13:07:56.464506 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.464758 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:07:56 crc kubenswrapper[4786]: E0127 13:07:56.464831 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.464913 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:07:56 crc kubenswrapper[4786]: E0127 13:07:56.465123 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.518168 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.518209 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.518220 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.518237 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.518249 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:56Z","lastTransitionTime":"2026-01-27T13:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.621145 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.621193 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.621206 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.621222 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.621235 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:56Z","lastTransitionTime":"2026-01-27T13:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.723787 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.723840 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.723856 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.723886 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.723896 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:56Z","lastTransitionTime":"2026-01-27T13:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.827525 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.827573 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.827585 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.827622 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.827633 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:56Z","lastTransitionTime":"2026-01-27T13:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.930282 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.930326 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.930335 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.930348 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:56 crc kubenswrapper[4786]: I0127 13:07:56.930357 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:56Z","lastTransitionTime":"2026-01-27T13:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.033223 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.033279 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.033295 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.033313 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.033326 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:57Z","lastTransitionTime":"2026-01-27T13:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.135685 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.135727 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.135737 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.135754 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.135764 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:57Z","lastTransitionTime":"2026-01-27T13:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.238780 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.238836 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.238847 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.238868 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.238881 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:57Z","lastTransitionTime":"2026-01-27T13:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.341583 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.341652 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.341662 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.341677 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.341685 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:57Z","lastTransitionTime":"2026-01-27T13:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.444193 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.444261 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.444276 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.444298 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.444311 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:57Z","lastTransitionTime":"2026-01-27T13:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.456974 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 12:18:04.575710084 +0000 UTC Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.464023 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:07:57 crc kubenswrapper[4786]: E0127 13:07:57.464229 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.480028 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bda5706f-b065-4472-a45c-baff1cca3c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://393d243fd50fbbc25df6261d62f7e4ddd7aee425cef5c4ccc0d8ff028cd34df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\"
,\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f3b3f74451aceef575e0385356b071170bdde56171fce36e1bdd5debdac37c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://823e47a91a33d234f34bb3e1529d300d469cab2bad4630ec73c30b9d669abe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://279be35df94d0bffd8ba166fd9774277663989daf60d2bb656fd57fc7e9534ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4b
a8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://279be35df94d0bffd8ba166fd9774277663989daf60d2bb656fd57fc7e9534ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:57Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.501432 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95953d20b6b9db4384a5f9b667793200b2bf76ea71791a88cd3dc1ecccb331f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95953d20b6b9db4384a5f9b667793200b2bf76ea71791a88cd3dc1ecccb331f0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:07:54Z\\\",\\\"message\\\":\\\"tion, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed 
to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:54Z is after 2025-08-24T17:21:41Z]\\\\nI0127 13:07:54.419132 6505 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-bh896\\\\nI0127 13:07:54.419138 6505 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-bh896\\\\nI0127 13:07:54.419124 6505 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-oauth-apiserver/api_TCP_cluster\\\\\\\", UUID:\\\\\\\"fe46cb89-4e54-4175-a112-1c5224cd299e\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-oauth-apiserver/api\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Rou\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6d56q_openshift-ovn-kubernetes(ad21a31d-efbf-4c10-b3d1-0f6cf71793bd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316
c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6d56q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:57Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.515669 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2ba35bdc2ffd3bc68ad3a19dbcd16b1b04e4e476958651146cf3ebbacf81570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T13:07:57Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.533781 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d153375a-777b-4331-992a-81c845c6d6eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f66f1182d0247b3a8413ae9ea30f8a3308dbe1a112c01473bae806c906f7383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-prn84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:57Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.546546 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.546624 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.546636 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.546651 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.546663 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:57Z","lastTransitionTime":"2026-01-27T13:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.550683 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c6a2646-52f7-41be-8a81-3fed6eac75cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93f091073d8f9193f660e52b257ca0aba0a3b4efff7d6d8f1ee5ed0e4166162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c39d828ab97f5bcb6d5e29df28ec844ede201872825065370a69b8ec3b035e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7bxtk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:57Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.569235 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a9a6f6ed65f430774bf05c88a99889462a10a5cdcc0b70fcbaccd4cfec9b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:57Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.586698 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:57Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.603322 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:57Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.617357 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:57Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.628757 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bsqnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd62799-0a3c-4682-acb6-a2bc9121b391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27f958678979f9bc086a378e90e5c6c8fc466d1c52e58bcc042ff72fafb7a4d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bsqnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:57Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.637427 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8jf77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fswjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fswjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8jf77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:57Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:57 crc 
kubenswrapper[4786]: I0127 13:07:57.648623 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.648658 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.648668 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.648680 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.648689 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:57Z","lastTransitionTime":"2026-01-27T13:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.654259 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8573a07-5c6b-490a-abd2-e38fe66ef4f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00de8064231a695efe43b465ac740c45561bd6e1998201059115249a9bd8118\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"light.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:07:16.073073 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:07:16.083509 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:07:16.083537 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083542 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083546 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:07:16.083549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:07:16.083552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:07:16.083555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:07:16.083684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 13:07:16.094546 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-572751212/tls.crt::/tmp/serving-cert-572751212/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769519220\\\\\\\\\\\\\\\" (2026-01-27 13:07:00 +0000 UTC to 2026-02-26 13:07:01 +0000 UTC (now=2026-01-27 13:07:16.094502626 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094907 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed 
loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769519221\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769519221\\\\\\\\\\\\\\\" (2026-01-27 12:07:01 +0000 UTC to 2027-01-27 12:07:01 +0000 UTC (now=2026-01-27 13:07:16.094841616 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094999 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nF0127 13:07:16.095038 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6
de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:57Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.666922 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ee4772-2b03-404a-9de5-d98dcb431f0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6f25469fce6943fbdd305039e3ed4bbf9694eb3e485d9a7556a708a1434f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181c870b27579fdf828f52cee2ce312219d03f4d4c298f83cf4f74d0e5af0fc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30779fa178428679a956a66ac9e5f5729db30b3aa0f0c5930f224cf140c5703f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77096ca9ca40766ea3f6ef7e915698ff33f824ba2bcfc0bb2c5b9b2eb64f21f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:57Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.727952 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9q6dk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a290f38c-b94c-4233-9d98-9a54a728cedb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8999aa32071dc4f05aef19c7e34333b8788246bb340368b22d15120c38465d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5xht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9q6dk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:57Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.742831 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bh896" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22e89788-1c8e-46a7-97b4-3981352bb420\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3025c216a87acd5041f5dea80529063ca66929e021eadd55f17145fe1ae0a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dl86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bh896\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:57Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.750890 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.750930 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.750941 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.750957 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.750969 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:57Z","lastTransitionTime":"2026-01-27T13:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.755629 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbce2942-82d4-4c43-b41d-20c66d2b0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ef31e49a0cd3198de7ca02721a5731f4ed6283e79697e4f2b0eb2bd00712eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-722rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd2b4623ac1917ef8bd9d422064f7090f88a1ae36158d508d314c46b6d3506a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-722rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:57Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.775198 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67692efe-6dd1-450c-9c0c-a16b104376b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd5d21270850bfee1e52c235f542505dc4b8129379b8ef932ed1d59c599abee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b35278bed2eea01653cf49906c6a1f3447c33d1b80963a3dd10e07a854ec57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9284ead95a7669e31809ac214e2297884715a760a2eb4984e404c059ff2b6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8aec363d5b95eaca7107351edfd1ae381e714391a4db22a72a30d0250534254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9a9fd35c4ee52eeffce0566113373c6d70b3ccadc124e7d6a4b8c7ba1d21a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:57Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.789461 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8f966fc9486cb42a28164dd56ec608330a6697d5e02318f78f9dfce3d79aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225905e87cce3c64d456d8b5e1bb4a58800fc55d59fcb9e98ef7523f1c8228f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:57Z is after 2025-08-24T17:21:41Z" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.853272 4786 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.853334 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.853346 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.853365 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.853382 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:57Z","lastTransitionTime":"2026-01-27T13:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.956224 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.956268 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.956276 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.956294 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:57 crc kubenswrapper[4786]: I0127 13:07:57.956305 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:57Z","lastTransitionTime":"2026-01-27T13:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.059258 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.059300 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.059309 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.059323 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.059333 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:58Z","lastTransitionTime":"2026-01-27T13:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.162768 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.162824 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.162833 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.162853 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.162864 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:58Z","lastTransitionTime":"2026-01-27T13:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.265333 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.265408 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.265429 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.265462 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.265484 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:58Z","lastTransitionTime":"2026-01-27T13:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.368277 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.368325 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.368337 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.368358 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.368370 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:58Z","lastTransitionTime":"2026-01-27T13:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.457357 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 14:03:45.196324604 +0000 UTC Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.464789 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.464860 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.464796 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:07:58 crc kubenswrapper[4786]: E0127 13:07:58.465065 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:07:58 crc kubenswrapper[4786]: E0127 13:07:58.465202 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:07:58 crc kubenswrapper[4786]: E0127 13:07:58.465340 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.470586 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.470635 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.470644 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.470660 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.470670 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:58Z","lastTransitionTime":"2026-01-27T13:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.573521 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.573565 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.573573 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.573587 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.573597 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:58Z","lastTransitionTime":"2026-01-27T13:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.676941 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.677236 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.677297 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.677366 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.677421 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:58Z","lastTransitionTime":"2026-01-27T13:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.779966 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.780543 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.780647 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.780734 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.780826 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:58Z","lastTransitionTime":"2026-01-27T13:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.883857 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.884095 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.884191 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.884255 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.884322 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:58Z","lastTransitionTime":"2026-01-27T13:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.987379 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.987451 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.987468 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.987494 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:58 crc kubenswrapper[4786]: I0127 13:07:58.987511 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:58Z","lastTransitionTime":"2026-01-27T13:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.091299 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.091348 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.091406 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.091448 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.091467 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:59Z","lastTransitionTime":"2026-01-27T13:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.195898 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.195972 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.196010 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.196055 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.196093 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:59Z","lastTransitionTime":"2026-01-27T13:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.299029 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.299531 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.299640 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.299727 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.299802 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:59Z","lastTransitionTime":"2026-01-27T13:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.402936 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.402979 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.402994 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.403016 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.403031 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:59Z","lastTransitionTime":"2026-01-27T13:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.457561 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 19:53:36.183523761 +0000 UTC Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.464078 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:07:59 crc kubenswrapper[4786]: E0127 13:07:59.464304 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.506215 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.506642 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.506790 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.506901 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.507064 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:59Z","lastTransitionTime":"2026-01-27T13:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.610340 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.610906 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.611061 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.611202 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.611320 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:59Z","lastTransitionTime":"2026-01-27T13:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.714477 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.714549 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.714570 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.714597 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.714641 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:59Z","lastTransitionTime":"2026-01-27T13:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.819032 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.819082 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.819098 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.819119 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.819133 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:59Z","lastTransitionTime":"2026-01-27T13:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.922926 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.922998 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.923017 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.923039 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:07:59 crc kubenswrapper[4786]: I0127 13:07:59.923057 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:07:59Z","lastTransitionTime":"2026-01-27T13:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.025950 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.026004 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.026020 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.026045 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.026065 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:00Z","lastTransitionTime":"2026-01-27T13:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.130173 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.130212 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.130225 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.130243 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.130259 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:00Z","lastTransitionTime":"2026-01-27T13:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.259506 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.260096 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.260285 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.260503 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.260745 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:00Z","lastTransitionTime":"2026-01-27T13:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.364708 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.364787 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.364809 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.364834 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.364855 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:00Z","lastTransitionTime":"2026-01-27T13:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.458577 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 11:33:27.722962162 +0000 UTC Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.464046 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:08:00 crc kubenswrapper[4786]: E0127 13:08:00.464302 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.464698 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:08:00 crc kubenswrapper[4786]: E0127 13:08:00.464806 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.464989 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:08:00 crc kubenswrapper[4786]: E0127 13:08:00.465151 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.467676 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.467725 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.467743 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.467766 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.467781 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:00Z","lastTransitionTime":"2026-01-27T13:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.572446 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.572478 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.572487 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.572502 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.572512 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:00Z","lastTransitionTime":"2026-01-27T13:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.676734 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.676789 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.676802 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.676825 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.676840 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:00Z","lastTransitionTime":"2026-01-27T13:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.780025 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.780452 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.780580 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.780687 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.780758 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:00Z","lastTransitionTime":"2026-01-27T13:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.883050 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.883102 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.883120 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.883138 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.883149 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:00Z","lastTransitionTime":"2026-01-27T13:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.986497 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.986578 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.986601 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.986673 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:00 crc kubenswrapper[4786]: I0127 13:08:00.986693 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:00Z","lastTransitionTime":"2026-01-27T13:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.090137 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.090200 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.090218 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.090247 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.090266 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:01Z","lastTransitionTime":"2026-01-27T13:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.204325 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.204406 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.204438 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.204467 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.204508 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:01Z","lastTransitionTime":"2026-01-27T13:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.307796 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.307866 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.307892 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.307920 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.307942 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:01Z","lastTransitionTime":"2026-01-27T13:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.411964 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.412022 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.412032 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.412049 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.412062 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:01Z","lastTransitionTime":"2026-01-27T13:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.459696 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 20:07:06.163933538 +0000 UTC Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.464894 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:08:01 crc kubenswrapper[4786]: E0127 13:08:01.465050 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.516078 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.516122 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.516132 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.516146 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.516155 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:01Z","lastTransitionTime":"2026-01-27T13:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.618193 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.618255 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.618269 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.618286 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.618735 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:01Z","lastTransitionTime":"2026-01-27T13:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.721327 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.721368 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.721376 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.721390 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.721400 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:01Z","lastTransitionTime":"2026-01-27T13:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.823656 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.823712 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.823720 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.823736 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.823747 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:01Z","lastTransitionTime":"2026-01-27T13:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.926150 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.926189 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.926200 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.926216 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:01 crc kubenswrapper[4786]: I0127 13:08:01.926229 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:01Z","lastTransitionTime":"2026-01-27T13:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.029687 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.029743 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.029763 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.029786 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.029804 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:02Z","lastTransitionTime":"2026-01-27T13:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.132751 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.132794 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.132804 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.132820 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.132830 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:02Z","lastTransitionTime":"2026-01-27T13:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.235187 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.235239 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.235252 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.235271 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.235286 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:02Z","lastTransitionTime":"2026-01-27T13:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.338332 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.338377 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.338389 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.338410 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.338423 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:02Z","lastTransitionTime":"2026-01-27T13:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.440902 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.440940 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.440948 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.440962 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.440972 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:02Z","lastTransitionTime":"2026-01-27T13:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.459851 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 18:56:47.043496198 +0000 UTC Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.464169 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.464216 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:08:02 crc kubenswrapper[4786]: E0127 13:08:02.464304 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.464417 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:08:02 crc kubenswrapper[4786]: E0127 13:08:02.464537 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:08:02 crc kubenswrapper[4786]: E0127 13:08:02.464656 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.543041 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.543087 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.543100 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.543115 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.543126 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:02Z","lastTransitionTime":"2026-01-27T13:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.645302 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.645344 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.645356 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.645370 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.645380 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:02Z","lastTransitionTime":"2026-01-27T13:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.747840 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.747876 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.747885 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.747899 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.747910 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:02Z","lastTransitionTime":"2026-01-27T13:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.850292 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.850328 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.850337 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.850350 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.850360 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:02Z","lastTransitionTime":"2026-01-27T13:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.953101 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.953172 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.953185 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.953200 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:02 crc kubenswrapper[4786]: I0127 13:08:02.953210 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:02Z","lastTransitionTime":"2026-01-27T13:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.055373 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.055441 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.055458 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.055483 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.055497 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:03Z","lastTransitionTime":"2026-01-27T13:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.158187 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.158235 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.158246 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.158263 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.158276 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:03Z","lastTransitionTime":"2026-01-27T13:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.260390 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.260423 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.260431 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.260443 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.260451 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:03Z","lastTransitionTime":"2026-01-27T13:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.362664 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.362713 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.362724 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.362750 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.362762 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:03Z","lastTransitionTime":"2026-01-27T13:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.460618 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 07:53:06.512677023 +0000 UTC Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.463945 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:08:03 crc kubenswrapper[4786]: E0127 13:08:03.464090 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.465124 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.465189 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.465213 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.465245 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.465266 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:03Z","lastTransitionTime":"2026-01-27T13:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.532373 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.532434 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.532446 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.532462 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.532473 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:03Z","lastTransitionTime":"2026-01-27T13:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:03 crc kubenswrapper[4786]: E0127 13:08:03.552165 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18042af8-71e3-4882-b2ca-158fe4a2012f\\\",\\\"systemUUID\\\":\\\"56795cdc-7796-46ae-b42e-edbe6c464279\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:03Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.556485 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.556511 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.556519 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.556531 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.556539 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:03Z","lastTransitionTime":"2026-01-27T13:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:03 crc kubenswrapper[4786]: E0127 13:08:03.568755 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18042af8-71e3-4882-b2ca-158fe4a2012f\\\",\\\"systemUUID\\\":\\\"56795cdc-7796-46ae-b42e-edbe6c464279\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:03Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.573274 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.573331 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.573348 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.573904 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.573957 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:03Z","lastTransitionTime":"2026-01-27T13:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:03 crc kubenswrapper[4786]: E0127 13:08:03.588114 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18042af8-71e3-4882-b2ca-158fe4a2012f\\\",\\\"systemUUID\\\":\\\"56795cdc-7796-46ae-b42e-edbe6c464279\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:03Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.592084 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.592250 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.592374 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.592501 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.592644 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:03Z","lastTransitionTime":"2026-01-27T13:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:03 crc kubenswrapper[4786]: E0127 13:08:03.606273 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18042af8-71e3-4882-b2ca-158fe4a2012f\\\",\\\"systemUUID\\\":\\\"56795cdc-7796-46ae-b42e-edbe6c464279\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:03Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.609147 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.609181 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.609190 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.609375 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.609385 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:03Z","lastTransitionTime":"2026-01-27T13:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:03 crc kubenswrapper[4786]: E0127 13:08:03.620259 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18042af8-71e3-4882-b2ca-158fe4a2012f\\\",\\\"systemUUID\\\":\\\"56795cdc-7796-46ae-b42e-edbe6c464279\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:03Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:03 crc kubenswrapper[4786]: E0127 13:08:03.620379 4786 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.621855 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.621881 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.621889 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.621902 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.621912 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:03Z","lastTransitionTime":"2026-01-27T13:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.724522 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.724770 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.724839 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.724897 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.724952 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:03Z","lastTransitionTime":"2026-01-27T13:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.827154 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.827187 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.827198 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.827211 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.827230 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:03Z","lastTransitionTime":"2026-01-27T13:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.929451 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.929496 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.929509 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.929526 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:03 crc kubenswrapper[4786]: I0127 13:08:03.929541 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:03Z","lastTransitionTime":"2026-01-27T13:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.031744 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.031806 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.031827 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.031854 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.031872 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:04Z","lastTransitionTime":"2026-01-27T13:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.123887 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496-metrics-certs\") pod \"network-metrics-daemon-8jf77\" (UID: \"a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496\") " pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:08:04 crc kubenswrapper[4786]: E0127 13:08:04.124145 4786 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 13:08:04 crc kubenswrapper[4786]: E0127 13:08:04.124304 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496-metrics-certs podName:a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496 nodeName:}" failed. No retries permitted until 2026-01-27 13:08:36.124267249 +0000 UTC m=+99.334881408 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496-metrics-certs") pod "network-metrics-daemon-8jf77" (UID: "a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.134226 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.134323 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.134339 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.134359 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.134375 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:04Z","lastTransitionTime":"2026-01-27T13:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.236454 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.236508 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.236520 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.236596 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.236654 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:04Z","lastTransitionTime":"2026-01-27T13:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.339017 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.339048 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.339059 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.339074 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.339085 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:04Z","lastTransitionTime":"2026-01-27T13:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.441799 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.441854 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.441871 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.441894 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.441910 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:04Z","lastTransitionTime":"2026-01-27T13:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.461462 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 14:08:29.693874341 +0000 UTC Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.464731 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.464888 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:08:04 crc kubenswrapper[4786]: E0127 13:08:04.464973 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.464987 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:08:04 crc kubenswrapper[4786]: E0127 13:08:04.465107 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:08:04 crc kubenswrapper[4786]: E0127 13:08:04.465202 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.543830 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.543881 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.543891 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.543908 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.543920 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:04Z","lastTransitionTime":"2026-01-27T13:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.645835 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.645883 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.645894 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.645911 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.645922 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:04Z","lastTransitionTime":"2026-01-27T13:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.748697 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.748769 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.748791 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.748825 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.748849 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:04Z","lastTransitionTime":"2026-01-27T13:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.851423 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.851458 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.851466 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.851478 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.851488 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:04Z","lastTransitionTime":"2026-01-27T13:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.954169 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.954218 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.954229 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.954244 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:04 crc kubenswrapper[4786]: I0127 13:08:04.954254 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:04Z","lastTransitionTime":"2026-01-27T13:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.056271 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.056311 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.056321 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.056339 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.056349 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:05Z","lastTransitionTime":"2026-01-27T13:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.158975 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.159016 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.159026 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.159043 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.159055 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:05Z","lastTransitionTime":"2026-01-27T13:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.261732 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.261787 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.261799 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.261817 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.261829 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:05Z","lastTransitionTime":"2026-01-27T13:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.364118 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.364171 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.364187 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.364211 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.364228 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:05Z","lastTransitionTime":"2026-01-27T13:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.462571 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 09:40:42.465260367 +0000 UTC Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.464081 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:08:05 crc kubenswrapper[4786]: E0127 13:08:05.464272 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.465879 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.465916 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.465927 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.465943 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.465956 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:05Z","lastTransitionTime":"2026-01-27T13:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.568813 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.569053 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.569142 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.569203 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.569256 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:05Z","lastTransitionTime":"2026-01-27T13:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.671418 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.671460 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.671472 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.671488 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.671500 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:05Z","lastTransitionTime":"2026-01-27T13:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.773904 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.773949 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.773957 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.773971 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.773983 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:05Z","lastTransitionTime":"2026-01-27T13:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.877010 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.877056 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.877068 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.877085 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.877108 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:05Z","lastTransitionTime":"2026-01-27T13:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.979773 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.979810 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.979820 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.979833 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:05 crc kubenswrapper[4786]: I0127 13:08:05.979842 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:05Z","lastTransitionTime":"2026-01-27T13:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.082180 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.082219 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.082232 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.082247 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.082260 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:06Z","lastTransitionTime":"2026-01-27T13:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.185217 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.185329 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.185342 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.185358 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.185370 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:06Z","lastTransitionTime":"2026-01-27T13:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.288864 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.288927 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.288942 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.289426 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.289474 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:06Z","lastTransitionTime":"2026-01-27T13:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.392151 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.392200 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.392211 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.392226 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.392236 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:06Z","lastTransitionTime":"2026-01-27T13:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.462895 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 19:33:13.536830607 +0000 UTC Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.464092 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.464131 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.464154 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:08:06 crc kubenswrapper[4786]: E0127 13:08:06.464224 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:08:06 crc kubenswrapper[4786]: E0127 13:08:06.464283 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:08:06 crc kubenswrapper[4786]: E0127 13:08:06.464355 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.494954 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.495435 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.495497 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.495556 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.495623 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:06Z","lastTransitionTime":"2026-01-27T13:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.597995 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.598033 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.598042 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.598055 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.598063 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:06Z","lastTransitionTime":"2026-01-27T13:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.700245 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.700298 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.700312 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.700332 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.700348 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:06Z","lastTransitionTime":"2026-01-27T13:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.803136 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.803174 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.803184 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.803197 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.803206 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:06Z","lastTransitionTime":"2026-01-27T13:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.871339 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9q6dk_a290f38c-b94c-4233-9d98-9a54a728cedb/kube-multus/0.log" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.871878 4786 generic.go:334] "Generic (PLEG): container finished" podID="a290f38c-b94c-4233-9d98-9a54a728cedb" containerID="8999aa32071dc4f05aef19c7e34333b8788246bb340368b22d15120c38465d60" exitCode=1 Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.871965 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9q6dk" event={"ID":"a290f38c-b94c-4233-9d98-9a54a728cedb","Type":"ContainerDied","Data":"8999aa32071dc4f05aef19c7e34333b8788246bb340368b22d15120c38465d60"} Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.872954 4786 scope.go:117] "RemoveContainer" containerID="8999aa32071dc4f05aef19c7e34333b8788246bb340368b22d15120c38465d60" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.887037 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bda5706f-b065-4472-a45c-baff1cca3c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://393d243fd50fbbc25df6261d62f7e4ddd7aee425cef5c4ccc0d8ff028cd34df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f3b3f74451aceef575e0385356b071170bdde56171fce36e1bdd5debdac37c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://823e47a91a33d234f34bb3e1529d300d469cab2bad4630ec73c30b9d669abe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://279be35df94d0bffd8ba166fd9774277663989daf60d2bb656fd57fc7e9534ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://279be35df94d0bffd8ba166fd9774277663989daf60d2bb656fd57fc7e9534ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:06Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.905418 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.905458 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.905467 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.905507 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.905519 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:06Z","lastTransitionTime":"2026-01-27T13:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.907820 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95953d20b6b9db4384a5f9b667793200b2bf76ea71791a88cd3dc1ecccb331f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95953d20b6b9db4384a5f9b667793200b2bf76ea71791a88cd3dc1ecccb331f0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:07:54Z\\\",\\\"message\\\":\\\"tion, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed 
to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:54Z is after 2025-08-24T17:21:41Z]\\\\nI0127 13:07:54.419132 6505 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-bh896\\\\nI0127 13:07:54.419138 6505 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-bh896\\\\nI0127 13:07:54.419124 6505 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-oauth-apiserver/api_TCP_cluster\\\\\\\", UUID:\\\\\\\"fe46cb89-4e54-4175-a112-1c5224cd299e\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-oauth-apiserver/api\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Rou\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6d56q_openshift-ovn-kubernetes(ad21a31d-efbf-4c10-b3d1-0f6cf71793bd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316
c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6d56q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:06Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.920435 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a9a6f6ed65f430774bf05c88a99889462a10a5cdcc0b70fcbaccd4cfec9b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:06Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.935866 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2ba35bdc2ffd3bc68ad3a19dbcd16b1b04e4e476958651146cf3ebbacf81570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:06Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.950088 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d153375a-777b-4331-992a-81c845c6d6eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f66f1182d0247b3a8413ae9ea30f8a3308dbe1a112c01473bae806c906f7383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:23Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-prn84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:06Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.960800 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c6a2646-52f7-41be-8a81-3fed6eac75cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93f091073d8f9193f660e52b257ca0aba0a3b4efff7d6d8f1ee5ed0e4166162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c39d828ab97f5bcb6d5e29df28ec844ede20187
2825065370a69b8ec3b035e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7bxtk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:06Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.975943 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8573a07-5c6b-490a-abd2-e38fe66ef4f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00de8064231a695efe43b465ac740c45561bd6e1998201059115249a9bd8118\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:07:16Z\\\"
,\\\"message\\\":\\\"light.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:07:16.073073 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:07:16.083509 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:07:16.083537 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083542 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083546 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:07:16.083549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:07:16.083552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:07:16.083555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:07:16.083684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 13:07:16.094546 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-572751212/tls.crt::/tmp/serving-cert-572751212/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769519220\\\\\\\\\\\\\\\" (2026-01-27 13:07:00 +0000 UTC to 2026-02-26 13:07:01 +0000 UTC (now=2026-01-27 13:07:16.094502626 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094907 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769519221\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769519221\\\\\\\\\\\\\\\" 
(2026-01-27 12:07:01 +0000 UTC to 2027-01-27 12:07:01 +0000 UTC (now=2026-01-27 13:07:16.094841616 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094999 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nF0127 13:07:16.095038 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\
"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:06Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:06 crc kubenswrapper[4786]: I0127 13:08:06.989259 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ee4772-2b03-404a-9de5-d98dcb431f0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6f25469fce6943fbdd305039e3ed4bbf9694eb3e485d9a7556a708a1434f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181c870b27579fdf828f52cee2ce312219d03f4d4c298f83cf4f74d0e5af0fc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30779fa178428679a956a66ac9e5f5729db30b3aa0f0c5930f224cf140c5703f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77096ca9ca40766ea3f6ef7e915698ff33f824ba2bcfc0bb2c5b9b2eb64f21f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:06Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.003665 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.016499 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.016754 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.016807 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.016822 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.016847 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.016859 4786 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:07Z","lastTransitionTime":"2026-01-27T13:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.029760 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.040387 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bsqnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd62799-0a3c-4682-acb6-a2bc9121b391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27f958678979f9bc086a378e90e5c6c8fc466d1c52e58bcc042ff72fafb7a4d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bsqnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.051597 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8jf77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fswjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fswjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8jf77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:07 crc 
kubenswrapper[4786]: I0127 13:08:07.071247 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67692efe-6dd1-450c-9c0c-a16b104376b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd5d21270850bfee1e52c235f542505dc4b8129379b8ef932ed1d59c599abee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://5b35278bed2eea01653cf49906c6a1f3447c33d1b80963a3dd10e07a854ec57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9284ead95a7669e31809ac214e2297884715a760a2eb4984e404c059ff2b6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8aec363d5b95eaca7107351edfd1ae381e714391a4db22a72a30d0250534254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9a9fd35c4ee52eeffce0566113373c6d70b3ccadc124e7d6a4b8c7ba1d21a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.083654 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8f966fc9486cb42a28164dd56ec608330a6697d5e02318f78f9dfce3d79aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225905e87cce3c64d456d8b5e1bb4a58800fc55d59fcb9e98ef7523f1c8228f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.096484 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9q6dk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a290f38c-b94c-4233-9d98-9a54a728cedb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8999aa32071dc4f05aef19c7e34333b8788246bb340368b22d15120c38465d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8999aa32071dc4f05aef19c7e34333b8788246bb340368b22d15120c38465d60\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:08:06Z\\\",\\\"message\\\":\\\"2026-01-27T13:07:20+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1a21eda6-c61f-48b2-962f-091ea4922321\\\\n2026-01-27T13:07:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1a21eda6-c61f-48b2-962f-091ea4922321 to /host/opt/cni/bin/\\\\n2026-01-27T13:07:21Z [verbose] multus-daemon started\\\\n2026-01-27T13:07:21Z [verbose] Readiness Indicator file check\\\\n2026-01-27T13:08:06Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5xht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9q6dk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.105501 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bh896" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22e89788-1c8e-46a7-97b4-3981352bb420\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3025c216a87acd5041f5dea80529063ca66929e021eadd55f17145fe1ae0a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dl86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bh896\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.117617 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbce2942-82d4-4c43-b41d-20c66d2b0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ef31e49a0cd3198de7ca02721a5731f4ed6283e79697e4f2b0eb2bd00712eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-722rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd2b4623ac1917ef8bd9d422064f7090f88a1ae36158d508d314c46b6d3506a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-722rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rgrth\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.119431 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.119465 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.119476 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.119492 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.119504 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:07Z","lastTransitionTime":"2026-01-27T13:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.221880 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.221931 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.221939 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.221955 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.221965 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:07Z","lastTransitionTime":"2026-01-27T13:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.323852 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.323898 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.323910 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.323926 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.323937 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:07Z","lastTransitionTime":"2026-01-27T13:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.426569 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.426634 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.426646 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.426663 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.426700 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:07Z","lastTransitionTime":"2026-01-27T13:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.463954 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 21:51:02.220198753 +0000 UTC Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.464135 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:08:07 crc kubenswrapper[4786]: E0127 13:08:07.464280 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.465041 4786 scope.go:117] "RemoveContainer" containerID="95953d20b6b9db4384a5f9b667793200b2bf76ea71791a88cd3dc1ecccb331f0" Jan 27 13:08:07 crc kubenswrapper[4786]: E0127 13:08:07.465340 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6d56q_openshift-ovn-kubernetes(ad21a31d-efbf-4c10-b3d1-0f6cf71793bd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.476782 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a9a6f6ed65f430774bf05c88a99889462a10a5cdcc0b70fcbaccd4cfec9b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd890
9e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.485303 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2ba35bdc2ffd3bc68ad3a19dbcd16b1b04e4e476958651146cf3ebbacf81570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T13:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.498024 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d153375a-777b-4331-992a-81c845c6d6eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f66f1182d0247b3a8413ae9ea30f8a3308dbe1a112c01473bae806c906f7383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-prn84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.507903 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c6a2646-52f7-41be-8a81-3fed6eac75cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93f091073d8f9193f660e52b257ca0aba0a3b4efff7d6d8f1ee5ed0e4166162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d0
6c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c39d828ab97f5bcb6d5e29df28ec844ede201872825065370a69b8ec3b035e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7bxtk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.520006 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ee4772-2b03-404a-9de5-d98dcb431f0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6f25469fce6943fbdd305039e3ed4bbf9694eb3e485d9a7556a708a1434f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181c870b27579fdf828f52cee2ce312219d03f4d4c298f83cf4f74d0e5af0fc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30779fa178428679a956a66ac9e5f5729db30b3aa0f0c5930f224cf140c5703f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77096ca9ca40766ea3f6ef7e915698ff33f824ba2bcfc0bb2c5b9b2eb64f21f0\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.529488 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.529515 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.529524 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.529538 4786 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.529548 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:07Z","lastTransitionTime":"2026-01-27T13:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.533736 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.545408 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.556335 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.566771 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bsqnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd62799-0a3c-4682-acb6-a2bc9121b391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27f958678979f9bc086a378e90e5c6c8fc466d1c52e58bcc042ff72fafb7a4d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bsqnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.575562 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8jf77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fswjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fswjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8jf77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:07 crc 
kubenswrapper[4786]: I0127 13:08:07.588112 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8573a07-5c6b-490a-abd2-e38fe66ef4f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00de8064231a6
95efe43b465ac740c45561bd6e1998201059115249a9bd8118\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"light.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:07:16.073073 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:07:16.083509 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:07:16.083537 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083542 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083546 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:07:16.083549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:07:16.083552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:07:16.083555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:07:16.083684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 13:07:16.094546 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-572751212/tls.crt::/tmp/serving-cert-572751212/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769519220\\\\\\\\\\\\\\\" (2026-01-27 13:07:00 +0000 UTC to 2026-02-26 13:07:01 +0000 UTC (now=2026-01-27 13:07:16.094502626 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094907 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769519221\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769519221\\\\\\\\\\\\\\\" (2026-01-27 12:07:01 +0000 UTC to 2027-01-27 12:07:01 +0000 UTC (now=2026-01-27 13:07:16.094841616 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094999 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nF0127 13:07:16.095038 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd9
0d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.599770 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8f966fc9486cb42a28164dd56ec608330a6697d5e02318f78f9dfce3d79aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225905e87cce3c64d456d8b5e1bb4a58800fc55d59fcb9e98ef7523f1c8228f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.611028 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9q6dk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a290f38c-b94c-4233-9d98-9a54a728cedb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8999aa32071dc4f05aef19c7e34333b8788246bb340368b22d15120c38465d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8999aa32071dc4f05aef19c7e34333b8788246bb340368b22d15120c38465d60\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:08:06Z\\\",\\\"message\\\":\\\"2026-01-27T13:07:20+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1a21eda6-c61f-48b2-962f-091ea4922321\\\\n2026-01-27T13:07:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1a21eda6-c61f-48b2-962f-091ea4922321 to /host/opt/cni/bin/\\\\n2026-01-27T13:07:21Z [verbose] multus-daemon started\\\\n2026-01-27T13:07:21Z [verbose] Readiness Indicator file check\\\\n2026-01-27T13:08:06Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5xht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9q6dk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.621857 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bh896" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22e89788-1c8e-46a7-97b4-3981352bb420\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3025c216a87acd5041f5dea80529063ca66929e021eadd55f17145fe1ae0a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dl86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bh896\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.631796 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.631829 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.631840 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.631856 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.631891 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:07Z","lastTransitionTime":"2026-01-27T13:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.633433 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbce2942-82d4-4c43-b41d-20c66d2b0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ef31e49a0cd3198de7ca02721a5731f4ed6283e79697e4f2b0eb2bd00712eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-722rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd2b4623ac1917ef8bd9d422064f7090f88a1ae36158d508d314c46b6d3506a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-722rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.654731 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67692efe-6dd1-450c-9c0c-a16b104376b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd5d21270850bfee1e52c235f542505dc4b8129379b8ef932ed1d59c599abee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b35278bed2eea01653cf49906c6a1f3447c33d1b80963a3dd10e07a854ec57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9284ead95a7669e31809ac214e2297884715a760a2eb4984e404c059ff2b6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8aec363d5b95eaca7107351edfd1ae381e714391a4db22a72a30d0250534254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9a9fd35c4ee52eeffce0566113373c6d70b3ccadc124e7d6a4b8c7ba1d21a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.672812 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95953d20b6b9db4384a5f9b667793200b2bf76ea71791a88cd3dc1ecccb331f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95953d20b6b9db4384a5f9b667793200b2bf76ea71791a88cd3dc1ecccb331f0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:07:54Z\\\",\\\"message\\\":\\\"tion, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed 
to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:54Z is after 2025-08-24T17:21:41Z]\\\\nI0127 13:07:54.419132 6505 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-bh896\\\\nI0127 13:07:54.419138 6505 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-bh896\\\\nI0127 13:07:54.419124 6505 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-oauth-apiserver/api_TCP_cluster\\\\\\\", UUID:\\\\\\\"fe46cb89-4e54-4175-a112-1c5224cd299e\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-oauth-apiserver/api\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Rou\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6d56q_openshift-ovn-kubernetes(ad21a31d-efbf-4c10-b3d1-0f6cf71793bd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316
c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6d56q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.685699 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bda5706f-b065-4472-a45c-baff1cca3c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://393d243fd50fbbc25df6261d62f7e4ddd7aee425cef5c4ccc0d8ff028cd34df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f3b3f74451aceef575e0385356b071170bdde56171fce36e1bdd5debdac37c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://823e47a91a33d234f34bb3e1529d300d469cab2bad4630ec73c30b9d669abe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://279be35df94d0bffd8ba166fd9774277663989daf60d2bb656fd57fc7e9534ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://279be35df94d0bffd8ba166fd9774277663989daf60d2bb656fd57fc7e9534ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.734253 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.734286 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.734293 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.734306 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.734314 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:07Z","lastTransitionTime":"2026-01-27T13:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.836467 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.836499 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.836507 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.836522 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.836532 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:07Z","lastTransitionTime":"2026-01-27T13:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.876313 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9q6dk_a290f38c-b94c-4233-9d98-9a54a728cedb/kube-multus/0.log" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.876376 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9q6dk" event={"ID":"a290f38c-b94c-4233-9d98-9a54a728cedb","Type":"ContainerStarted","Data":"e07ac0a78f8e6cfc9b374b179d250b45f9a86c225a033ed2c706b709a3007b7f"} Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.897484 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67692efe-6dd1-450c-9c0c-a16b104376b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd5d21270850bfee1e52c235f542505dc4b8129379b8ef932ed1d59c599abee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b35278bed2eea01653cf49906c6a1f3447c33d1b80963a3dd10e07a854ec57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9284ead95a7669e31809ac214e2297884715a760a2eb4984e404c059ff2b6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\
\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8aec363d5b95eaca7107351edfd1ae381e714391a4db22a72a30d0250534254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9a9fd35c4ee52eeffce0566113373c6d70b3ccadc124e7d6a4b8c7ba1d21a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\
":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.910024 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8f966fc9486cb42a28164dd56ec608330a6697d5e02318f78f9dfce3d79aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225905e87cce3c64d456d8b5e1bb4a58800fc55d59fcb9e98ef7523f1c8228f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.923084 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9q6dk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a290f38c-b94c-4233-9d98-9a54a728cedb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac0a78f8e6cfc9b374b179d250b45f9a86c225a033ed2c706b709a3007b7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8999aa32071dc4f05aef19c7e34333b8788246bb340368b22d15120c38465d60\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:08:06Z\\\",\\\"message\\\":\\\"2026-01-27T13:07:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1a21eda6-c61f-48b2-962f-091ea4922321\\\\n2026-01-27T13:07:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1a21eda6-c61f-48b2-962f-091ea4922321 to /host/opt/cni/bin/\\\\n2026-01-27T13:07:21Z [verbose] multus-daemon started\\\\n2026-01-27T13:07:21Z [verbose] 
Readiness Indicator file check\\\\n2026-01-27T13:08:06Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5xht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9q6dk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.935077 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bh896" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22e89788-1c8e-46a7-97b4-3981352bb420\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3025c216a87acd504
1f5dea80529063ca66929e021eadd55f17145fe1ae0a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dl86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bh896\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.945162 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.945238 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.945253 4786 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.945294 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.945308 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:07Z","lastTransitionTime":"2026-01-27T13:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.947742 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbce2942-82d4-4c43-b41d-20c66d2b0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ef31e49a0cd3198de7ca02721a5731f4ed6283e79697e4f2b0eb2bd00712eaa\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-722rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd2b4623ac1917ef8bd9d422064f7090f88a1ae36158d508d314c46b6d3506a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-722rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\
\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.960451 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bda5706f-b065-4472-a45c-baff1cca3c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://393d243fd50fbbc25df6261d62f7e4ddd7aee425cef5c4ccc0d8ff028cd34df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f3b3f74451aceef575e0385356b071170bdde56171fce36e1bdd5debdac37c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://823e47a91a33d234f34bb3e1529d300d469cab2bad4630ec73c30b9d669abe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-d
ir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://279be35df94d0bffd8ba166fd9774277663989daf60d2bb656fd57fc7e9534ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://279be35df94d0bffd8ba166fd9774277663989daf60d2bb656fd57fc7e9534ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.979493 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95953d20b6b9db4384a5f9b667793200b2bf76ea71791a88cd3dc1ecccb331f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95953d20b6b9db4384a5f9b667793200b2bf76ea71791a88cd3dc1ecccb331f0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:07:54Z\\\",\\\"message\\\":\\\"tion, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed 
to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:54Z is after 2025-08-24T17:21:41Z]\\\\nI0127 13:07:54.419132 6505 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-bh896\\\\nI0127 13:07:54.419138 6505 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-bh896\\\\nI0127 13:07:54.419124 6505 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-oauth-apiserver/api_TCP_cluster\\\\\\\", UUID:\\\\\\\"fe46cb89-4e54-4175-a112-1c5224cd299e\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-oauth-apiserver/api\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Rou\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6d56q_openshift-ovn-kubernetes(ad21a31d-efbf-4c10-b3d1-0f6cf71793bd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316
c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6d56q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:07 crc kubenswrapper[4786]: I0127 13:08:07.994793 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a9a6f6ed65f430774bf05c88a99889462a10a5cdcc0b70fcbaccd4cfec9b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.007753 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2ba35bdc2ffd3bc68ad3a19dbcd16b1b04e4e476958651146cf3ebbacf81570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:08Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.021763 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d153375a-777b-4331-992a-81c845c6d6eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f66f1182d0247b3a8413ae9ea30f8a3308dbe1a112c01473bae806c906f7383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:23Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-prn84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:08Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.031964 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c6a2646-52f7-41be-8a81-3fed6eac75cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93f091073d8f9193f660e52b257ca0aba0a3b4efff7d6d8f1ee5ed0e4166162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c39d828ab97f5bcb6d5e29df28ec844ede20187
2825065370a69b8ec3b035e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7bxtk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:08Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.042580 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bsqnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd62799-0a3c-4682-acb6-a2bc9121b391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27f958678979f9bc086a378e90e5c6c8fc466d1c52e58bcc042ff72fafb7a4d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bsqnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:08Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.047550 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.047585 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.047594 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.047626 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.047660 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:08Z","lastTransitionTime":"2026-01-27T13:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.052342 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8jf77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fswjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fswjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8jf77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:08Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:08 crc 
kubenswrapper[4786]: I0127 13:08:08.064162 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8573a07-5c6b-490a-abd2-e38fe66ef4f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00de8064231a6
95efe43b465ac740c45561bd6e1998201059115249a9bd8118\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"light.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:07:16.073073 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:07:16.083509 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:07:16.083537 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083542 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083546 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:07:16.083549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:07:16.083552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:07:16.083555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:07:16.083684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 13:07:16.094546 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-572751212/tls.crt::/tmp/serving-cert-572751212/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769519220\\\\\\\\\\\\\\\" (2026-01-27 13:07:00 +0000 UTC to 2026-02-26 13:07:01 +0000 UTC (now=2026-01-27 13:07:16.094502626 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094907 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769519221\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769519221\\\\\\\\\\\\\\\" (2026-01-27 12:07:01 +0000 UTC to 2027-01-27 12:07:01 +0000 UTC (now=2026-01-27 13:07:16.094841616 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094999 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nF0127 13:07:16.095038 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd9
0d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:08Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.074583 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ee4772-2b03-404a-9de5-d98dcb431f0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6f25469fce6943fbdd305039e3ed4bbf9694eb3e485d9a7556a708a1434f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181c870b27579fdf828f52cee2ce312219d03f4d4c298f83cf4f74d0e5af0fc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30779fa178428679a956a66ac9e5f5729db30b3aa0f0c5930f224cf140c5703f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77096ca9ca40766ea3f6ef7e915698ff33f824ba2bcfc0bb2c5b9b2eb64f21f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:08Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.086564 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:08Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.149746 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:08Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.149946 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.149969 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.149977 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.149990 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.150009 4786 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:08Z","lastTransitionTime":"2026-01-27T13:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.162928 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:08Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.251905 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.251989 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.252005 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.252038 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.252066 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:08Z","lastTransitionTime":"2026-01-27T13:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.354342 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.354432 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.354450 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.354469 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.354481 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:08Z","lastTransitionTime":"2026-01-27T13:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.457050 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.457118 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.457129 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.457149 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.457160 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:08Z","lastTransitionTime":"2026-01-27T13:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.464401 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.464441 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 09:13:28.484835382 +0000 UTC Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.464504 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:08:08 crc kubenswrapper[4786]: E0127 13:08:08.464552 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.464501 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:08:08 crc kubenswrapper[4786]: E0127 13:08:08.464667 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:08:08 crc kubenswrapper[4786]: E0127 13:08:08.464915 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.559521 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.559572 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.559582 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.559630 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.559648 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:08Z","lastTransitionTime":"2026-01-27T13:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.662011 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.662049 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.662058 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.662071 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.662082 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:08Z","lastTransitionTime":"2026-01-27T13:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.764711 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.764752 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.764761 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.764776 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.764786 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:08Z","lastTransitionTime":"2026-01-27T13:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.867639 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.867675 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.867687 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.867702 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.867711 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:08Z","lastTransitionTime":"2026-01-27T13:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.970474 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.970523 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.970535 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.970553 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:08 crc kubenswrapper[4786]: I0127 13:08:08.970566 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:08Z","lastTransitionTime":"2026-01-27T13:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.072909 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.072943 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.072950 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.072965 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.072975 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:09Z","lastTransitionTime":"2026-01-27T13:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.175740 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.175810 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.175822 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.175837 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.175847 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:09Z","lastTransitionTime":"2026-01-27T13:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.279594 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.279676 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.279689 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.279708 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.279722 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:09Z","lastTransitionTime":"2026-01-27T13:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.384070 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.384104 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.384111 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.384123 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.384132 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:09Z","lastTransitionTime":"2026-01-27T13:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.464758 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 17:23:27.733597562 +0000 UTC Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.464952 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:08:09 crc kubenswrapper[4786]: E0127 13:08:09.465091 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.486652 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.486705 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.486718 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.486732 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.486744 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:09Z","lastTransitionTime":"2026-01-27T13:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.590455 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.591050 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.591076 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.591093 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.591104 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:09Z","lastTransitionTime":"2026-01-27T13:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.694039 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.694085 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.694098 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.694116 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.694127 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:09Z","lastTransitionTime":"2026-01-27T13:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.796585 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.796691 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.796710 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.796735 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.796753 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:09Z","lastTransitionTime":"2026-01-27T13:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.898589 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.898651 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.898659 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.898671 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:09 crc kubenswrapper[4786]: I0127 13:08:09.898681 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:09Z","lastTransitionTime":"2026-01-27T13:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.000542 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.000595 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.000625 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.000643 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.000657 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:10Z","lastTransitionTime":"2026-01-27T13:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.103947 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.104014 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.104030 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.104056 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.104071 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:10Z","lastTransitionTime":"2026-01-27T13:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.206697 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.206772 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.206792 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.206822 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.206854 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:10Z","lastTransitionTime":"2026-01-27T13:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.309380 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.309433 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.309445 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.309461 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.309474 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:10Z","lastTransitionTime":"2026-01-27T13:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.413112 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.413161 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.413172 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.413190 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.413203 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:10Z","lastTransitionTime":"2026-01-27T13:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.463895 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:08:10 crc kubenswrapper[4786]: E0127 13:08:10.464040 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.464212 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:08:10 crc kubenswrapper[4786]: E0127 13:08:10.464312 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.464415 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:08:10 crc kubenswrapper[4786]: E0127 13:08:10.464459 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.464968 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 09:45:20.939141962 +0000 UTC Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.515897 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.515952 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.515963 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.515980 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.515992 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:10Z","lastTransitionTime":"2026-01-27T13:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.618736 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.618792 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.618804 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.618823 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.618836 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:10Z","lastTransitionTime":"2026-01-27T13:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.721573 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.721637 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.721649 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.721666 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.721678 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:10Z","lastTransitionTime":"2026-01-27T13:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.823575 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.823632 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.823643 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.823659 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.823671 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:10Z","lastTransitionTime":"2026-01-27T13:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.925926 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.925981 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.925991 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.926005 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:10 crc kubenswrapper[4786]: I0127 13:08:10.926014 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:10Z","lastTransitionTime":"2026-01-27T13:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.028904 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.028939 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.028947 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.028960 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.028969 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:11Z","lastTransitionTime":"2026-01-27T13:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.131429 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.131468 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.131476 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.131488 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.131498 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:11Z","lastTransitionTime":"2026-01-27T13:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.233834 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.233878 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.233887 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.233901 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.233911 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:11Z","lastTransitionTime":"2026-01-27T13:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.335842 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.335880 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.335889 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.335901 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.335909 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:11Z","lastTransitionTime":"2026-01-27T13:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.438892 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.438937 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.438949 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.438965 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.438979 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:11Z","lastTransitionTime":"2026-01-27T13:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.464802 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:08:11 crc kubenswrapper[4786]: E0127 13:08:11.465189 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.465269 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 05:47:28.598304307 +0000 UTC Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.541633 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.541690 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.541703 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.541722 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.541732 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:11Z","lastTransitionTime":"2026-01-27T13:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.644276 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.644324 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.644335 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.644354 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.644367 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:11Z","lastTransitionTime":"2026-01-27T13:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.746680 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.746718 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.746731 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.746748 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.746762 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:11Z","lastTransitionTime":"2026-01-27T13:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.849107 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.849139 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.849147 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.849161 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.849168 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:11Z","lastTransitionTime":"2026-01-27T13:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.951325 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.951365 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.951376 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.951392 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:11 crc kubenswrapper[4786]: I0127 13:08:11.951403 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:11Z","lastTransitionTime":"2026-01-27T13:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.054114 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.054152 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.054163 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.054178 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.054191 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:12Z","lastTransitionTime":"2026-01-27T13:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.157336 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.157469 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.157488 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.157505 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.157518 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:12Z","lastTransitionTime":"2026-01-27T13:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.260058 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.260107 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.260125 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.260150 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.260169 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:12Z","lastTransitionTime":"2026-01-27T13:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.362895 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.362929 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.362939 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.362955 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.362966 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:12Z","lastTransitionTime":"2026-01-27T13:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.464336 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:08:12 crc kubenswrapper[4786]: E0127 13:08:12.464559 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.464733 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.464828 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:08:12 crc kubenswrapper[4786]: E0127 13:08:12.464888 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:08:12 crc kubenswrapper[4786]: E0127 13:08:12.464928 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.466462 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 17:49:15.162099501 +0000 UTC Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.466996 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.467037 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.467053 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.467073 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.467089 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:12Z","lastTransitionTime":"2026-01-27T13:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.570303 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.570367 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.570377 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.570392 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.570401 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:12Z","lastTransitionTime":"2026-01-27T13:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.674107 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.674156 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.674166 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.674179 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.674189 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:12Z","lastTransitionTime":"2026-01-27T13:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.777203 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.777266 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.777282 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.777301 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.777312 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:12Z","lastTransitionTime":"2026-01-27T13:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.879596 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.879651 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.879661 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.879674 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.879684 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:12Z","lastTransitionTime":"2026-01-27T13:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.982231 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.982284 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.982296 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.982338 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:12 crc kubenswrapper[4786]: I0127 13:08:12.982350 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:12Z","lastTransitionTime":"2026-01-27T13:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.084448 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.084525 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.084543 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.084559 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.084571 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:13Z","lastTransitionTime":"2026-01-27T13:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.186929 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.186972 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.186981 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.186994 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.187002 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:13Z","lastTransitionTime":"2026-01-27T13:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.289191 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.289234 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.289242 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.289254 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.289263 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:13Z","lastTransitionTime":"2026-01-27T13:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.391292 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.391331 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.391339 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.391352 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.391364 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:13Z","lastTransitionTime":"2026-01-27T13:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.464525 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:08:13 crc kubenswrapper[4786]: E0127 13:08:13.464893 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.466849 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 04:11:34.829907152 +0000 UTC Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.476692 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.493439 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.493513 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.493527 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.493544 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.493557 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:13Z","lastTransitionTime":"2026-01-27T13:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.595731 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.595771 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.595782 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.595798 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.595809 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:13Z","lastTransitionTime":"2026-01-27T13:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.698083 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.698125 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.698135 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.698148 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.698158 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:13Z","lastTransitionTime":"2026-01-27T13:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.800832 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.800886 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.800894 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.800912 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.800922 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:13Z","lastTransitionTime":"2026-01-27T13:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.846597 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.846662 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.846677 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.846692 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.846703 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:13Z","lastTransitionTime":"2026-01-27T13:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:13 crc kubenswrapper[4786]: E0127 13:08:13.863032 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18042af8-71e3-4882-b2ca-158fe4a2012f\\\",\\\"systemUUID\\\":\\\"56795cdc-7796-46ae-b42e-edbe6c464279\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:13Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.867458 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.867502 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.867516 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.867534 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.867548 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:13Z","lastTransitionTime":"2026-01-27T13:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.886359 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.886395 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.886403 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.886416 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.886425 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:13Z","lastTransitionTime":"2026-01-27T13:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:13 crc kubenswrapper[4786]: E0127 13:08:13.904534 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18042af8-71e3-4882-b2ca-158fe4a2012f\\\",\\\"systemUUID\\\":\\\"56795cdc-7796-46ae-b42e-edbe6c464279\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:13Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.908872 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.908939 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.908963 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.908995 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.909019 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:13Z","lastTransitionTime":"2026-01-27T13:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:13 crc kubenswrapper[4786]: E0127 13:08:13.922882 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18042af8-71e3-4882-b2ca-158fe4a2012f\\\",\\\"systemUUID\\\":\\\"56795cdc-7796-46ae-b42e-edbe6c464279\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:13Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.927365 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.927403 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.927414 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.927431 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.927443 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:13Z","lastTransitionTime":"2026-01-27T13:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:13 crc kubenswrapper[4786]: E0127 13:08:13.945394 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18042af8-71e3-4882-b2ca-158fe4a2012f\\\",\\\"systemUUID\\\":\\\"56795cdc-7796-46ae-b42e-edbe6c464279\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:13Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:13 crc kubenswrapper[4786]: E0127 13:08:13.945518 4786 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.947324 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.947402 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.947426 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.947456 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:13 crc kubenswrapper[4786]: I0127 13:08:13.947480 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:13Z","lastTransitionTime":"2026-01-27T13:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.049664 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.049691 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.049699 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.049713 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.049721 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:14Z","lastTransitionTime":"2026-01-27T13:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.151576 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.151652 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.151663 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.151680 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.151697 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:14Z","lastTransitionTime":"2026-01-27T13:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.255417 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.255497 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.255519 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.255549 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.255569 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:14Z","lastTransitionTime":"2026-01-27T13:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.358092 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.358175 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.358196 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.358224 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.358249 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:14Z","lastTransitionTime":"2026-01-27T13:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.461886 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.461979 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.461998 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.462018 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.462034 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:14Z","lastTransitionTime":"2026-01-27T13:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.464211 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.464306 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:08:14 crc kubenswrapper[4786]: E0127 13:08:14.464496 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.464543 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:08:14 crc kubenswrapper[4786]: E0127 13:08:14.464714 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:08:14 crc kubenswrapper[4786]: E0127 13:08:14.464816 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.467450 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 10:13:03.67912737 +0000 UTC Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.565302 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.565354 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.565370 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.565390 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.565404 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:14Z","lastTransitionTime":"2026-01-27T13:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.668955 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.669029 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.669054 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.669084 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.669107 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:14Z","lastTransitionTime":"2026-01-27T13:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.773005 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.773068 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.773086 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.773109 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.773125 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:14Z","lastTransitionTime":"2026-01-27T13:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.875887 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.875940 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.875952 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.875970 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.875986 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:14Z","lastTransitionTime":"2026-01-27T13:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.978095 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.978149 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.978163 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.978181 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:14 crc kubenswrapper[4786]: I0127 13:08:14.978193 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:14Z","lastTransitionTime":"2026-01-27T13:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.082062 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.082111 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.082121 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.082137 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.082148 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:15Z","lastTransitionTime":"2026-01-27T13:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.185322 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.185379 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.185398 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.185421 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.185439 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:15Z","lastTransitionTime":"2026-01-27T13:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.287967 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.288010 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.288018 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.288034 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.288045 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:15Z","lastTransitionTime":"2026-01-27T13:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.391265 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.391730 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.391749 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.391773 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.391791 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:15Z","lastTransitionTime":"2026-01-27T13:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.464132 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:08:15 crc kubenswrapper[4786]: E0127 13:08:15.464331 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.467851 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 07:16:23.646794465 +0000 UTC Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.494276 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.494325 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.494336 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.494351 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.494362 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:15Z","lastTransitionTime":"2026-01-27T13:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.597483 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.597587 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.597646 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.597697 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.597734 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:15Z","lastTransitionTime":"2026-01-27T13:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.700828 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.700894 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.700910 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.700932 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.700950 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:15Z","lastTransitionTime":"2026-01-27T13:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.803341 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.803389 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.803399 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.803415 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.803426 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:15Z","lastTransitionTime":"2026-01-27T13:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.905675 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.905740 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.905758 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.905787 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:15 crc kubenswrapper[4786]: I0127 13:08:15.905808 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:15Z","lastTransitionTime":"2026-01-27T13:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.008816 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.008887 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.008909 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.008959 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.008984 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:16Z","lastTransitionTime":"2026-01-27T13:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.111440 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.111504 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.111521 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.111553 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.111572 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:16Z","lastTransitionTime":"2026-01-27T13:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.213825 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.214140 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.214158 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.214175 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.214187 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:16Z","lastTransitionTime":"2026-01-27T13:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.317202 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.317249 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.317264 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.317285 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.317299 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:16Z","lastTransitionTime":"2026-01-27T13:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.419899 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.419935 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.419943 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.419955 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.419963 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:16Z","lastTransitionTime":"2026-01-27T13:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.463838 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.463986 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:08:16 crc kubenswrapper[4786]: E0127 13:08:16.464048 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.464213 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:08:16 crc kubenswrapper[4786]: E0127 13:08:16.464324 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:08:16 crc kubenswrapper[4786]: E0127 13:08:16.464432 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.467996 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 21:40:09.014039185 +0000 UTC Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.522411 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.522466 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.522483 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.522508 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.522533 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:16Z","lastTransitionTime":"2026-01-27T13:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.625417 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.625485 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.625505 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.625531 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.625550 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:16Z","lastTransitionTime":"2026-01-27T13:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.728506 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.728565 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.728579 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.728628 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.728645 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:16Z","lastTransitionTime":"2026-01-27T13:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.831708 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.831753 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.831763 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.831778 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.831790 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:16Z","lastTransitionTime":"2026-01-27T13:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.934588 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.934683 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.934695 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.934717 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:16 crc kubenswrapper[4786]: I0127 13:08:16.934729 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:16Z","lastTransitionTime":"2026-01-27T13:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.037566 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.037597 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.037647 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.037668 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.037678 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:17Z","lastTransitionTime":"2026-01-27T13:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.141264 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.141322 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.141333 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.141352 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.141365 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:17Z","lastTransitionTime":"2026-01-27T13:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.244991 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.245058 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.245074 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.245130 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.245150 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:17Z","lastTransitionTime":"2026-01-27T13:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.347655 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.347738 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.347757 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.347789 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.347808 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:17Z","lastTransitionTime":"2026-01-27T13:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.450998 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.451090 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.451130 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.451174 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.451204 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:17Z","lastTransitionTime":"2026-01-27T13:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.464355 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:08:17 crc kubenswrapper[4786]: E0127 13:08:17.464564 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.468231 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 04:08:46.188525576 +0000 UTC Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.485294 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c6a2646-52f7-41be-8a81-3fed6eac75cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93f091073d8f9193f660e52b257ca0aba0a3b4efff7d6d8f1ee5ed0e4166162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"rest
artCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c39d828ab97f5bcb6d5e29df28ec844ede201872825065370a69b8ec3b035e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7bxtk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:17Z is after 
2025-08-24T17:21:41Z" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.500539 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84271e5c-079d-4cbe-8fef-c66af030561f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c929c23ed1b0a4edb128dcd3e034412d0ebc29434b3da4ad191337dfb776e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"init
ContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7122d0fcd9cf517570a47584791413235924c7c7192a487896c93eccfd5d8d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7122d0fcd9cf517570a47584791413235924c7c7192a487896c93eccfd5d8d4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.519212 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a9a6f6ed65f430774bf05c88a99889462a10a5cdcc0b70fcbaccd4cfec9b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.534026 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2ba35bdc2ffd3bc68ad3a19dbcd16b1b04e4e476958651146cf3ebbacf81570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.555752 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.555828 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.555847 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.555878 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.555898 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:17Z","lastTransitionTime":"2026-01-27T13:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.558887 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d153375a-777b-4331-992a-81c845c6d6eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f66f1182d0247b3a8413ae9ea30f8a3308dbe1a112c01473bae806c906f7383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-prn84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.576530 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.591774 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bsqnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd62799-0a3c-4682-acb6-a2bc9121b391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27f958678979f9bc086a378e90e5c6c8fc466d1c52e58bcc042ff72fafb7a4d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bsqnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.602854 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8jf77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fswjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fswjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8jf77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:17 crc 
kubenswrapper[4786]: I0127 13:08:17.616413 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8573a07-5c6b-490a-abd2-e38fe66ef4f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00de8064231a6
95efe43b465ac740c45561bd6e1998201059115249a9bd8118\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"light.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:07:16.073073 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:07:16.083509 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:07:16.083537 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083542 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083546 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:07:16.083549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:07:16.083552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:07:16.083555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:07:16.083684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 13:07:16.094546 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-572751212/tls.crt::/tmp/serving-cert-572751212/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769519220\\\\\\\\\\\\\\\" (2026-01-27 13:07:00 +0000 UTC to 2026-02-26 13:07:01 +0000 UTC (now=2026-01-27 13:07:16.094502626 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094907 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769519221\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769519221\\\\\\\\\\\\\\\" (2026-01-27 12:07:01 +0000 UTC to 2027-01-27 12:07:01 +0000 UTC (now=2026-01-27 13:07:16.094841616 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094999 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nF0127 13:07:16.095038 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd9
0d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.633367 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ee4772-2b03-404a-9de5-d98dcb431f0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6f25469fce6943fbdd305039e3ed4bbf9694eb3e485d9a7556a708a1434f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181c870b27579fdf828f52cee2ce312219d03f4d4c298f83cf4f74d0e5af0fc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30779fa178428679a956a66ac9e5f5729db30b3aa0f0c5930f224cf140c5703f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77096ca9ca40766ea3f6ef7e915698ff33f824ba2bcfc0bb2c5b9b2eb64f21f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.648448 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.663393 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.663494 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.663555 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 
13:08:17.663584 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.663649 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:17Z","lastTransitionTime":"2026-01-27T13:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.664718 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.679002 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rgrth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbce2942-82d4-4c43-b41d-20c66d2b0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ef31e49a0cd3198de7ca02721a5731f4ed6283e79697e4f2b0eb2bd00712eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-722rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd2b4623ac1917ef8bd9d422064f7090f88a1
ae36158d508d314c46b6d3506a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-722rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.699671 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67692efe-6dd1-450c-9c0c-a16b104376b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd5d21270850bfee1e52c235f542505dc4b8129379b8ef932ed1d59c599abee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b35278bed2eea01653cf49906c6a1f3447c33d1b80963a3dd10e07a854ec57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9284ead95a7669e31809ac214e2297884715a760a2eb4984e404c059ff2b6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8aec363d5b95eaca7107351edfd1ae381e714391a4db22a72a30d0250534254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9a9fd35c4ee52eeffce0566113373c6d70b3ccadc124e7d6a4b8c7ba1d21a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.711311 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8f966fc9486cb42a28164dd56ec608330a6697d5e02318f78f9dfce3d79aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225905e87cce3c64d456d8b5e1bb4a58800fc55d59fcb9e98ef7523f1c8228f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.729326 4786 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9q6dk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a290f38c-b94c-4233-9d98-9a54a728cedb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac0a78f8e6cfc9b374b179d250b45f9a86c225a033ed2c706b709a3007b7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8999aa32071dc4f05aef19c7e34333b8788246bb340368b22d15120c38465d60\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:08:06Z\\\",\\\"message\\\":\\\"2026-01-27T13:07:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1a21eda6-c61f-48b2-962f-091ea4922321\\\\n2026-01-27T13:07:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1a21eda6-c61f-48b2-962f-091ea4922321 
to /host/opt/cni/bin/\\\\n2026-01-27T13:07:21Z [verbose] multus-daemon started\\\\n2026-01-27T13:07:21Z [verbose] Readiness Indicator file check\\\\n2026-01-27T13:08:06Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-c
erts\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5xht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9q6dk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.742253 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bh896" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22e89788-1c8e-46a7-97b4-3981352bb420\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3025c216a87acd5041f5dea80529063ca66929e021eadd55f17145fe1ae0a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dl86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bh896\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.759848 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bda5706f-b065-4472-a45c-baff1cca3c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://393d243fd50fbbc25df6261d62f7e4ddd7aee425cef5c4ccc0d8ff028cd34df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259
7126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f3b3f74451aceef575e0385356b071170bdde56171fce36e1bdd5debdac37c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://823e47a91a33d234f34bb3e1529d300d469cab2bad4630ec73c30b9d669abe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://279be35df94d0bffd8ba166fd9774277663989daf60d2bb656fd57fc7e9534ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://279be35df94d0bffd8ba166fd9774277663989daf60d2bb656fd57fc7e9534ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.766483 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.766577 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.766597 4786 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.766663 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.766690 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:17Z","lastTransitionTime":"2026-01-27T13:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.810650 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95953d20b6b9db4384a5f9b667793200b2bf76ea71791a88cd3dc1ecccb331f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95953d20b6b9db4384a5f9b667793200b2bf76ea71791a88cd3dc1ecccb331f0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:07:54Z\\\",\\\"message\\\":\\\"tion, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed 
to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:54Z is after 2025-08-24T17:21:41Z]\\\\nI0127 13:07:54.419132 6505 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-bh896\\\\nI0127 13:07:54.419138 6505 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-bh896\\\\nI0127 13:07:54.419124 6505 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-oauth-apiserver/api_TCP_cluster\\\\\\\", UUID:\\\\\\\"fe46cb89-4e54-4175-a112-1c5224cd299e\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-oauth-apiserver/api\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Rou\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6d56q_openshift-ovn-kubernetes(ad21a31d-efbf-4c10-b3d1-0f6cf71793bd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316
c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6d56q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.868757 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.868806 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.868818 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.868836 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.868849 4786 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:17Z","lastTransitionTime":"2026-01-27T13:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.971647 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.971689 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.971699 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.971714 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:17 crc kubenswrapper[4786]: I0127 13:08:17.971724 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:17Z","lastTransitionTime":"2026-01-27T13:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.074207 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.074253 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.074264 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.074280 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.074291 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:18Z","lastTransitionTime":"2026-01-27T13:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.176842 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.176916 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.176929 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.176957 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.176974 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:18Z","lastTransitionTime":"2026-01-27T13:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.279102 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.279161 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.279174 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.279193 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.279205 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:18Z","lastTransitionTime":"2026-01-27T13:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.380913 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.380955 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.380965 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.380981 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.380993 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:18Z","lastTransitionTime":"2026-01-27T13:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.464375 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.464414 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.464414 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:08:18 crc kubenswrapper[4786]: E0127 13:08:18.464567 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:08:18 crc kubenswrapper[4786]: E0127 13:08:18.464678 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:08:18 crc kubenswrapper[4786]: E0127 13:08:18.464746 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.468381 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 05:53:24.820247964 +0000 UTC Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.483522 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.483563 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.483574 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.483590 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.483618 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:18Z","lastTransitionTime":"2026-01-27T13:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.586418 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.586474 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.586484 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.586498 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.586509 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:18Z","lastTransitionTime":"2026-01-27T13:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.688918 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.688965 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.688976 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.688994 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.689006 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:18Z","lastTransitionTime":"2026-01-27T13:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.792066 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.792123 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.792132 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.792145 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.792155 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:18Z","lastTransitionTime":"2026-01-27T13:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.894127 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.894170 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.894178 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.894193 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.894204 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:18Z","lastTransitionTime":"2026-01-27T13:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.995806 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.995863 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.995871 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.995883 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:18 crc kubenswrapper[4786]: I0127 13:08:18.995892 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:18Z","lastTransitionTime":"2026-01-27T13:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.098746 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.098796 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.098808 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.098827 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.098839 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:19Z","lastTransitionTime":"2026-01-27T13:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.201096 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.201137 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.201145 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.201178 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.201187 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:19Z","lastTransitionTime":"2026-01-27T13:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.304321 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.304393 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.304413 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.304448 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.304468 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:19Z","lastTransitionTime":"2026-01-27T13:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.407821 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.407901 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.407915 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.407932 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.407943 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:19Z","lastTransitionTime":"2026-01-27T13:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.465178 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:08:19 crc kubenswrapper[4786]: E0127 13:08:19.465776 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.465991 4786 scope.go:117] "RemoveContainer" containerID="95953d20b6b9db4384a5f9b667793200b2bf76ea71791a88cd3dc1ecccb331f0" Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.469013 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 02:15:01.998323498 +0000 UTC Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.511193 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.511474 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.511584 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.511669 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.511726 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:19Z","lastTransitionTime":"2026-01-27T13:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.614987 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.615032 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.615044 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.615066 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.615080 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:19Z","lastTransitionTime":"2026-01-27T13:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.718097 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.718153 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.718166 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.718187 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.718202 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:19Z","lastTransitionTime":"2026-01-27T13:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.821527 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.821579 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.821589 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.821639 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.821653 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:19Z","lastTransitionTime":"2026-01-27T13:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.918145 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6d56q_ad21a31d-efbf-4c10-b3d1-0f6cf71793bd/ovnkube-controller/2.log" Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.921333 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" event={"ID":"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd","Type":"ContainerStarted","Data":"1a2a9faaefef939acbc751ed484c251a13489af87b8f9f705fa46c6dbfb003e5"} Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.921955 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.923776 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.923809 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.923825 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.923845 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.923862 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:19Z","lastTransitionTime":"2026-01-27T13:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:19 crc kubenswrapper[4786]: I0127 13:08:19.985698 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bda5706f-b065-4472-a45c-baff1cca3c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://393d243fd50fbbc25df6261d62f7e4ddd7aee425cef5c4ccc0d8ff028cd34df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f3b3f74451aceef575e0385356b0
71170bdde56171fce36e1bdd5debdac37c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://823e47a91a33d234f34bb3e1529d300d469cab2bad4630ec73c30b9d669abe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://279be35df94d0bffd8ba166fd9774277663989daf60d2bb656fd57fc7e9534ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://279be35df94d0bffd8ba166fd9774277663989daf60d2bb656fd57fc7e9534ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:19Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.009157 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a2a9faaefef939acbc751ed484c251a13489af87b8f9f705fa46c6dbfb003e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95953d20b6b9db4384a5f9b667793200b2bf76ea71791a88cd3dc1ecccb331f0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:07:54Z\\\",\\\"message\\\":\\\"tion, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed 
to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:54Z is after 2025-08-24T17:21:41Z]\\\\nI0127 13:07:54.419132 6505 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-bh896\\\\nI0127 13:07:54.419138 6505 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-bh896\\\\nI0127 13:07:54.419124 6505 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-oauth-apiserver/api_TCP_cluster\\\\\\\", UUID:\\\\\\\"fe46cb89-4e54-4175-a112-1c5224cd299e\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-oauth-apiserver/api\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, 
Rou\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\
\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6d56q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.026771 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c6a2646-52f7-41be-8a81-3fed6eac75cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93f091073d8f9193f660e52b257ca0aba0a3b4efff7d6d8f1ee5ed0e4166162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c39d828ab97f5bcb6d5e29df28ec844ede20187
2825065370a69b8ec3b035e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7bxtk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.027108 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.027125 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.027135 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:20 crc 
kubenswrapper[4786]: I0127 13:08:20.027155 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.027168 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:20Z","lastTransitionTime":"2026-01-27T13:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.040205 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84271e5c-079d-4cbe-8fef-c66af030561f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c929c23ed1b0a4edb128dcd3e034412d0ebc29434b3da4ad191337dfb776e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7122d0fcd9cf517570a47584791413235924c7c7192a487896c93eccfd5d8d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7122d0fcd9cf517570a47584791413235924c7c7192a487896c93eccfd5d8d4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T13:08:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.053246 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a9a6f6ed65f430774bf05c88a99889462a10a5cdcc0b70fcbaccd4cfec9b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.069611 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2ba35bdc2ffd3bc68ad3a19dbcd16b1b04e4e476958651146cf3ebbacf81570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-al
erter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.083976 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d153375a-777b-4331-992a-81c845c6d6eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f66f1182d0247b3a8413ae9ea30f8a3308dbe1a112c01473bae806c906f7383\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disab
led\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-prn84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.098982 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.110509 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bsqnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd62799-0a3c-4682-acb6-a2bc9121b391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27f958678979f9bc086a378e90e5c6c8fc466d1c52e58bcc042ff72fafb7a4d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bsqnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.121767 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8jf77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fswjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fswjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8jf77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:20 crc 
kubenswrapper[4786]: I0127 13:08:20.129677 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.129731 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.129742 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.129759 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.129769 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:20Z","lastTransitionTime":"2026-01-27T13:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.139158 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8573a07-5c6b-490a-abd2-e38fe66ef4f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00de8064231a695efe43b465ac740c45561bd6e1998201059115249a9bd8118\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"light.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:07:16.073073 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:07:16.083509 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:07:16.083537 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083542 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083546 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:07:16.083549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:07:16.083552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:07:16.083555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:07:16.083684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 13:07:16.094546 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-572751212/tls.crt::/tmp/serving-cert-572751212/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769519220\\\\\\\\\\\\\\\" (2026-01-27 13:07:00 +0000 UTC to 2026-02-26 13:07:01 +0000 UTC (now=2026-01-27 13:07:16.094502626 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094907 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed 
loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769519221\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769519221\\\\\\\\\\\\\\\" (2026-01-27 12:07:01 +0000 UTC to 2027-01-27 12:07:01 +0000 UTC (now=2026-01-27 13:07:16.094841616 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094999 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nF0127 13:07:16.095038 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6
de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.159075 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ee4772-2b03-404a-9de5-d98dcb431f0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6f25469fce6943fbdd305039e3ed4bbf9694eb3e485d9a7556a708a1434f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181c870b27579fdf828f52cee2ce312219d03f4d4c298f83cf4f74d0e5af0fc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30779fa178428679a956a66ac9e5f5729db30b3aa0f0c5930f224cf140c5703f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77096ca9ca40766ea3f6ef7e915698ff33f824ba2bcfc0bb2c5b9b2eb64f21f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.172696 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.184513 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.196483 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rgrth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbce2942-82d4-4c43-b41d-20c66d2b0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ef31e49a0cd3198de7ca02721a5731f4ed6283e79697e4f2b0eb2bd00712eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-722rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd2b4623ac1917ef8bd9d422064f7090f88a1
ae36158d508d314c46b6d3506a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-722rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.220777 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67692efe-6dd1-450c-9c0c-a16b104376b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd5d21270850bfee1e52c235f542505dc4b8129379b8ef932ed1d59c599abee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b35278bed2eea01653cf49906c6a1f3447c33d1b80963a3dd10e07a854ec57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9284ead95a7669e31809ac214e2297884715a760a2eb4984e404c059ff2b6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8aec363d5b95eaca7107351edfd1ae381e714391a4db22a72a30d0250534254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9a9fd35c4ee52eeffce0566113373c6d70b3ccadc124e7d6a4b8c7ba1d21a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.232424 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.232463 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.232472 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.232494 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.232507 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:20Z","lastTransitionTime":"2026-01-27T13:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.235207 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8f966fc9486cb42a28164dd56ec608330a6697d5e02318f78f9dfce3d79aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225905e87cce3c64d456d8b5e1bb4a58800fc55d59fcb9e98ef7523f1c8228f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.251343 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9q6dk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a290f38c-b94c-4233-9d98-9a54a728cedb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac0a78f8e6cfc9b374b179d250b45f9a86c225a033ed2c706b709a3007b7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8999aa32071dc4f05aef19c7e34333b8788246bb340368b22d15120c38465d60\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:08:06Z\\\",\\\"message\\\":\\\"2026-01-27T13:07:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1a21eda6-c61f-48b2-962f-091ea4922321\\\\n2026-01-27T13:07:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1a21eda6-c61f-48b2-962f-091ea4922321 to /host/opt/cni/bin/\\\\n2026-01-27T13:07:21Z [verbose] multus-daemon started\\\\n2026-01-27T13:07:21Z [verbose] 
Readiness Indicator file check\\\\n2026-01-27T13:08:06Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5xht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9q6dk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.264077 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bh896" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22e89788-1c8e-46a7-97b4-3981352bb420\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3025c216a87acd504
1f5dea80529063ca66929e021eadd55f17145fe1ae0a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dl86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bh896\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.334846 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.334884 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.334893 4786 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.334911 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.334922 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:20Z","lastTransitionTime":"2026-01-27T13:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.408093 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.408283 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:08:20 crc kubenswrapper[4786]: E0127 13:08:20.408323 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:24.408271741 +0000 UTC m=+147.618885860 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.408389 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.408438 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.408466 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:08:20 crc kubenswrapper[4786]: E0127 13:08:20.408532 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 
13:08:20 crc kubenswrapper[4786]: E0127 13:08:20.408565 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 13:08:20 crc kubenswrapper[4786]: E0127 13:08:20.408586 4786 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:08:20 crc kubenswrapper[4786]: E0127 13:08:20.408594 4786 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 13:08:20 crc kubenswrapper[4786]: E0127 13:08:20.408626 4786 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 13:08:20 crc kubenswrapper[4786]: E0127 13:08:20.408630 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 13:08:20 crc kubenswrapper[4786]: E0127 13:08:20.408666 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 13:08:20 crc kubenswrapper[4786]: E0127 13:08:20.408685 4786 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:08:20 crc kubenswrapper[4786]: 
E0127 13:08:20.408706 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 13:09:24.408678862 +0000 UTC m=+147.619293121 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:08:20 crc kubenswrapper[4786]: E0127 13:08:20.408740 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 13:09:24.408723353 +0000 UTC m=+147.619337642 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 13:08:20 crc kubenswrapper[4786]: E0127 13:08:20.408769 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 13:09:24.408754494 +0000 UTC m=+147.619368623 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 13:08:20 crc kubenswrapper[4786]: E0127 13:08:20.408791 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 13:09:24.408778675 +0000 UTC m=+147.619393014 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.438132 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.438167 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.438176 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.438191 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.438202 4786 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:20Z","lastTransitionTime":"2026-01-27T13:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.464923 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.464982 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:08:20 crc kubenswrapper[4786]: E0127 13:08:20.465076 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:08:20 crc kubenswrapper[4786]: E0127 13:08:20.465256 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.464923 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:08:20 crc kubenswrapper[4786]: E0127 13:08:20.465440 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.470117 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 02:27:09.420232099 +0000 UTC Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.541993 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.542087 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.542111 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.542150 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.542175 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:20Z","lastTransitionTime":"2026-01-27T13:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.645127 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.645177 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.645189 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.645207 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.645217 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:20Z","lastTransitionTime":"2026-01-27T13:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.748805 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.748902 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.748932 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.748982 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.749007 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:20Z","lastTransitionTime":"2026-01-27T13:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.852174 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.852248 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.852272 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.852310 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.852339 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:20Z","lastTransitionTime":"2026-01-27T13:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.929200 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6d56q_ad21a31d-efbf-4c10-b3d1-0f6cf71793bd/ovnkube-controller/3.log" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.930281 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6d56q_ad21a31d-efbf-4c10-b3d1-0f6cf71793bd/ovnkube-controller/2.log" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.934414 4786 generic.go:334] "Generic (PLEG): container finished" podID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerID="1a2a9faaefef939acbc751ed484c251a13489af87b8f9f705fa46c6dbfb003e5" exitCode=1 Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.934521 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" event={"ID":"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd","Type":"ContainerDied","Data":"1a2a9faaefef939acbc751ed484c251a13489af87b8f9f705fa46c6dbfb003e5"} Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.934679 4786 scope.go:117] "RemoveContainer" containerID="95953d20b6b9db4384a5f9b667793200b2bf76ea71791a88cd3dc1ecccb331f0" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.935794 4786 scope.go:117] "RemoveContainer" containerID="1a2a9faaefef939acbc751ed484c251a13489af87b8f9f705fa46c6dbfb003e5" Jan 27 13:08:20 crc kubenswrapper[4786]: E0127 13:08:20.936137 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6d56q_openshift-ovn-kubernetes(ad21a31d-efbf-4c10-b3d1-0f6cf71793bd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.958101 4786 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.958149 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.958164 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.958184 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.958195 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:20Z","lastTransitionTime":"2026-01-27T13:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:20 crc kubenswrapper[4786]: I0127 13:08:20.962798 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a2a9faaefef939acbc751ed484c251a13489af87b8f9f705fa46c6dbfb003e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95953d20b6b9db4384a5f9b667793200b2bf76ea71791a88cd3dc1ecccb331f0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:07:54Z\\\",\\\"message\\\":\\\"tion, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed 
to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:07:54Z is after 2025-08-24T17:21:41Z]\\\\nI0127 13:07:54.419132 6505 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-bh896\\\\nI0127 13:07:54.419138 6505 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-bh896\\\\nI0127 13:07:54.419124 6505 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-oauth-apiserver/api_TCP_cluster\\\\\\\", UUID:\\\\\\\"fe46cb89-4e54-4175-a112-1c5224cd299e\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-oauth-apiserver/api\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Rou\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a2a9faaefef939acbc751ed484c251a13489af87b8f9f705fa46c6dbfb003e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:08:20Z\\\",\\\"message\\\":\\\"459-gdk6g\\\\nI0127 13:08:20.382810 6878 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-bh896\\\\nF0127 13:08:20.382821 6878 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:20Z is after 2025-08-24T17:21:41Z]\\\\nI0127 13:08:20.382795 6878 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-prn84\\\\nI0127 13:08:20.382748 6878 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0127 13:08:20.382837 6878 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-prn84 in no\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni
-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6d56q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:20 crc 
kubenswrapper[4786]: I0127 13:08:20.983250 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bda5706f-b065-4472-a45c-baff1cca3c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://393d243fd50fbbc25df6261d62f7e4ddd7aee425cef5c4ccc0d8ff028cd34df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f3b3f74451aceef575e0385356b071170bdde56171fce36e1bdd5debdac37c\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://823e47a91a33d234f34bb3e1529d300d469cab2bad4630ec73c30b9d669abe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://279be35df94d0bffd8ba166fd9774277663989daf60d2bb656fd57fc7e9534ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://279be35df94d0bffd8ba166fd9774277663989daf60d2bb656fd57fc7e9534ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.000022 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a9a6f6ed65f430774bf05c88a99889462a10a5cdcc0b70fcbaccd4cfec9b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:20Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.016645 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2ba35bdc2ffd3bc68ad3a19dbcd16b1b04e4e476958651146cf3ebbacf81570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:21Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.043236 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d153375a-777b-4331-992a-81c845c6d6eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f66f1182d0247b3a8413ae9ea30f8a3308dbe1a112c01473bae806c906f7383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:23Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-prn84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:21Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.061141 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.061186 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.061199 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.061214 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.061226 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:21Z","lastTransitionTime":"2026-01-27T13:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.064868 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c6a2646-52f7-41be-8a81-3fed6eac75cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93f091073d8f9193f660e52b257ca0aba0a3b4efff7d6d8f1ee5ed0e4166162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-pro
xy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c39d828ab97f5bcb6d5e29df28ec844ede201872825065370a69b8ec3b035e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7bxtk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:21Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.083096 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84271e5c-079d-4cbe-8fef-c66af030561f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c929c23ed1b0a4edb128dcd3e034412d0ebc29434b3da4ad191337dfb776e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7122d0fcd9cf517570a47584791413235924c7c7192a487896c93eccfd5d8d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7122d0fcd9cf517570a47584791413235924c7c7192a487896c93eccfd5d8d4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:21Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.100415 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ee4772-2b03-404a-9de5-d98dcb431f0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6f25469fce6943fbdd305039e3ed4bbf9694eb3e485d9a7556a708a1434f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181c870b27579fdf828f52cee2ce312219d03f4d4c298f83cf4f74d0e5af0fc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30779fa178428679a956a66ac9e5f5729db30b3aa0f0c5930f224cf140c5703f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77096ca9ca40766ea3f6ef7e915698ff33f824ba2bcfc0bb2c5b9b2eb64f21f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:21Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.122848 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:21Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.143081 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:21Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.163421 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.163463 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.163472 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.163488 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.163497 4786 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:21Z","lastTransitionTime":"2026-01-27T13:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.163532 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:21Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.182078 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bsqnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd62799-0a3c-4682-acb6-a2bc9121b391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27f958678979f9bc086a378e90e5c6c8fc466d1c52e58bcc042ff72fafb7a4d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bsqnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:21Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.199085 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8jf77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fswjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fswjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8jf77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:21Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:21 crc 
kubenswrapper[4786]: I0127 13:08:21.217435 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8573a07-5c6b-490a-abd2-e38fe66ef4f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00de8064231a6
95efe43b465ac740c45561bd6e1998201059115249a9bd8118\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"light.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:07:16.073073 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:07:16.083509 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:07:16.083537 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083542 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083546 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:07:16.083549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:07:16.083552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:07:16.083555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:07:16.083684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 13:07:16.094546 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-572751212/tls.crt::/tmp/serving-cert-572751212/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769519220\\\\\\\\\\\\\\\" (2026-01-27 13:07:00 +0000 UTC to 2026-02-26 13:07:01 +0000 UTC (now=2026-01-27 13:07:16.094502626 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094907 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769519221\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769519221\\\\\\\\\\\\\\\" (2026-01-27 12:07:01 +0000 UTC to 2027-01-27 12:07:01 +0000 UTC (now=2026-01-27 13:07:16.094841616 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094999 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nF0127 13:07:16.095038 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd9
0d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:21Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.238702 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8f966fc9486cb42a28164dd56ec608330a6697d5e02318f78f9dfce3d79aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225905e87cce3c64d456d8b5e1bb4a58800fc55d59fcb9e98ef7523f1c8228f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:21Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.253567 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9q6dk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a290f38c-b94c-4233-9d98-9a54a728cedb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac0a78f8e6cfc9b374b179d250b45f9a86c225a033ed2c706b709a3007b7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8999aa32071dc4f05aef19c7e34333b8788246bb340368b22d15120c38465d60\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:08:06Z\\\",\\\"message\\\":\\\"2026-01-27T13:07:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1a21eda6-c61f-48b2-962f-091ea4922321\\\\n2026-01-27T13:07:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1a21eda6-c61f-48b2-962f-091ea4922321 to /host/opt/cni/bin/\\\\n2026-01-27T13:07:21Z [verbose] multus-daemon started\\\\n2026-01-27T13:07:21Z [verbose] 
Readiness Indicator file check\\\\n2026-01-27T13:08:06Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5xht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9q6dk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:21Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.265714 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.265746 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.265796 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.265810 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.265820 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:21Z","lastTransitionTime":"2026-01-27T13:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.266006 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bh896" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22e89788-1c8e-46a7-97b4-3981352bb420\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3025c216a87acd5041f5dea80529063ca66929e021eadd55f17145fe1ae0a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dl86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bh896\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:21Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.277272 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbce2942-82d4-4c43-b41d-20c66d2b0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ef31e49a
0cd3198de7ca02721a5731f4ed6283e79697e4f2b0eb2bd00712eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-722rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd2b4623ac1917ef8bd9d422064f7090f88a1ae36158d508d314c46b6d3506a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-722rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:21Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.305047 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67692efe-6dd1-450c-9c0c-a16b104376b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd5d21270850bfee1e52c235f542505dc4b8129379b8ef932ed1d59c599abee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b35278bed2eea01653cf49906c6a1f3447c33d1b80963a3dd10e07a854ec57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9284ead95a7669e31809ac214e2297884715a760a2eb4984e404c059ff2b6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-2
7T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8aec363d5b95eaca7107351edfd1ae381e714391a4db22a72a30d0250534254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9a9fd35c4ee52eeffce0566113373c6d70b3ccadc124e7d6a4b8c7ba1d21a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:21Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.368633 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.368677 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.368687 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.368702 4786 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.368711 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:21Z","lastTransitionTime":"2026-01-27T13:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.464043 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:08:21 crc kubenswrapper[4786]: E0127 13:08:21.464225 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.471738 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 12:32:04.384429604 +0000 UTC Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.473724 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.473783 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.473795 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.473812 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.473824 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:21Z","lastTransitionTime":"2026-01-27T13:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.577037 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.577118 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.577128 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.577145 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.577156 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:21Z","lastTransitionTime":"2026-01-27T13:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.680155 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.680206 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.680217 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.680235 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.680247 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:21Z","lastTransitionTime":"2026-01-27T13:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.783846 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.783920 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.783938 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.783963 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.783983 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:21Z","lastTransitionTime":"2026-01-27T13:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.886370 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.886459 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.886496 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.886534 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.886557 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:21Z","lastTransitionTime":"2026-01-27T13:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.941802 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6d56q_ad21a31d-efbf-4c10-b3d1-0f6cf71793bd/ovnkube-controller/3.log" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.948758 4786 scope.go:117] "RemoveContainer" containerID="1a2a9faaefef939acbc751ed484c251a13489af87b8f9f705fa46c6dbfb003e5" Jan 27 13:08:21 crc kubenswrapper[4786]: E0127 13:08:21.949040 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6d56q_openshift-ovn-kubernetes(ad21a31d-efbf-4c10-b3d1-0f6cf71793bd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.971876 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bda5706f-b065-4472-a45c-baff1cca3c3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://393d243fd50fbbc25df6261d62f7e4ddd7aee425cef5c4ccc0d8ff028cd34df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f3b3f74451aceef575e0385356b071170bdde56171fce36e1bdd5debdac37c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://823e47a91a33d234f34bb3e1529d300d469cab2bad4630ec73c30b9d669abe06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://279be35df94d0bffd8ba166fd9774277663989daf60d2bb656fd57fc7e9534ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://279be35df94d0bffd8ba166fd9774277663989daf60d2bb656fd57fc7e9534ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:21Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.990085 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.990135 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.990150 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.990175 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:21 crc kubenswrapper[4786]: I0127 13:08:21.990192 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:21Z","lastTransitionTime":"2026-01-27T13:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.004230 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a2a9faaefef939acbc751ed484c251a13489af87b8f9f705fa46c6dbfb003e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a2a9faaefef939acbc751ed484c251a13489af87b8f9f705fa46c6dbfb003e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:08:20Z\\\",\\\"message\\\":\\\"459-gdk6g\\\\nI0127 13:08:20.382810 6878 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-bh896\\\\nF0127 13:08:20.382821 6878 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add 
Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:20Z is after 2025-08-24T17:21:41Z]\\\\nI0127 13:08:20.382795 6878 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-prn84\\\\nI0127 13:08:20.382748 6878 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0127 13:08:20.382837 6878 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-prn84 in no\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:08:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6d56q_openshift-ovn-kubernetes(ad21a31d-efbf-4c10-b3d1-0f6cf71793bd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d8644af47d5227316
c61121fd96faf97310d985483100bb1378ccfeb953ef4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5rgf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6d56q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.025837 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-prn84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d153375a-777b-4331-992a-81c845c6d6eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f66f1182d0247b3a8413ae9ea30f8a3308dbe1a112c01473bae806c906f7383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f787ba2f52da09e5eddba36b73b46fa7144e28b570196f9e22801913352103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a709a6bb9db6f8a5c1f358f7bf6299f17ccd8155452bc45efe9835416f2f0ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eec0b7ccd5507f7553d77d0203deb5529f9f34976fff2e183421ab20b88b5ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a786
dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a786dd39081e574554645a1c9f3b1b510cc89da043d6d2692073e94ed5327c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f2faafcad4911d890e063df8ce28409ab6921ff2d04836151a2792d741fad21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1257f4d9d2cf274bc81506df1936137ee0fac37efd1622e5bc361d7cccb6023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gj2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-prn84\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.044221 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c6a2646-52f7-41be-8a81-3fed6eac75cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93f091073d8f9193f660e52b257ca0aba0a3b4efff7d6d8f1ee5ed0e4166162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c39d828ab97f5bcb6d5e29df28ec844ede201872825065370a69b8ec3b035e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phqbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7bxtk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:22 crc kubenswrapper[4786]: 
I0127 13:08:22.059500 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84271e5c-079d-4cbe-8fef-c66af030561f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c929c23ed1b0a4edb128dcd3e034412d0ebc29434b3da4ad191337dfb776e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7122d0f
cd9cf517570a47584791413235924c7c7192a487896c93eccfd5d8d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7122d0fcd9cf517570a47584791413235924c7c7192a487896c93eccfd5d8d4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.081734 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a9a6f6ed65f430774bf05c88a99889462a10a5cdcc0b70fcbaccd4cfec9b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.092438 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.092494 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.092503 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.092520 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.092529 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:22Z","lastTransitionTime":"2026-01-27T13:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.099677 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2ba35bdc2ffd3bc68ad3a19dbcd16b1b04e4e476958651146cf3ebbacf81570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.120725 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.141093 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.155454 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bsqnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcd62799-0a3c-4682-acb6-a2bc9121b391\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27f958678979f9bc086a378e90e5c6c8fc466d1c52e58bcc042ff72fafb7a4d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bsqnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.169658 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8jf77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fswjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fswjw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8jf77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:22 crc 
kubenswrapper[4786]: I0127 13:08:22.183276 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8573a07-5c6b-490a-abd2-e38fe66ef4f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00de8064231a6
95efe43b465ac740c45561bd6e1998201059115249a9bd8118\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"light.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 13:07:16.073073 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 13:07:16.083509 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 13:07:16.083537 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083542 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 13:07:16.083546 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 13:07:16.083549 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 13:07:16.083552 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 13:07:16.083555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 13:07:16.083684 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0127 13:07:16.094546 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-572751212/tls.crt::/tmp/serving-cert-572751212/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769519220\\\\\\\\\\\\\\\" (2026-01-27 13:07:00 +0000 UTC to 2026-02-26 13:07:01 +0000 UTC (now=2026-01-27 13:07:16.094502626 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094907 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769519221\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769519221\\\\\\\\\\\\\\\" (2026-01-27 12:07:01 +0000 UTC to 2027-01-27 12:07:01 +0000 UTC (now=2026-01-27 13:07:16.094841616 +0000 UTC))\\\\\\\"\\\\nI0127 13:07:16.094999 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nF0127 13:07:16.095038 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd9
0d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.195519 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.195778 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.195821 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.195839 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.195850 4786 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:22Z","lastTransitionTime":"2026-01-27T13:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.199903 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9ee4772-2b03-404a-9de5-d98dcb431f0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e6f25469fce6943fbdd305039e3ed4bbf9694eb3e485d9a7556a708a1434f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181c870b27579fdf828f52cee2ce312219d03f4d4c298f83cf4f74d0e5af0fc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30779fa178428679a956a66ac9e5f5729db30b3aa0f0c5930f224cf140c5703f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"co
ntainerID\\\":\\\"cri-o://77096ca9ca40766ea3f6ef7e915698ff33f824ba2bcfc0bb2c5b9b2eb64f21f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.213350 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.224446 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bh896" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22e89788-1c8e-46a7-97b4-3981352bb420\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3025c216a87acd5041f5dea80529063ca66929e021eadd55f17145fe1ae0a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dl86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bh896\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.238389 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbce2942-82d4-4c43-b41d-20c66d2b0be0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ef31e49a0cd3198de7ca02721a5731f4ed6283e79697e4f2b0eb2bd00712eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-722rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd2b4623ac1917ef8bd9d422064f7090f88a1ae36158d508d314c46b6d3506a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-722rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rgrth\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.261864 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67692efe-6dd1-450c-9c0c-a16b104376b8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd5d21270850bfee1e52c235f542505dc4b8129379b8ef932ed1d59c599abee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
tc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b35278bed2eea01653cf49906c6a1f3447c33d1b80963a3dd10e07a854ec57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9284ead95a7669e31809ac214e2297884715a760a2eb4984e404c059ff2b6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://b8aec363d5b95eaca7107351edfd1ae381e714391a4db22a72a30d0250534254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9a9fd35c4ee52eeffce0566113373c6d70b3ccadc124e7d6a4b8c7ba1d21a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7
c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec011f24f4ef3a383ea2bc9f6b033d91495f3c5e373389f6d69638f453dce2e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901823c91467661a057d6fd4a238f9077c0aa52f7caf4eec4d4b6411186e2117\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:06:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:06:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dbfe6bea71e502e14dcd1a9b58e547ccb33d8d985a634ff230518d6c6beb128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T13:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:06:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.277263 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d8f966fc9486cb42a28164dd56ec608330a6697d5e02318f78f9dfce3d79aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225905e87cce3c64d456d8b5e1bb4a58800fc55d59fcb9e98ef7523f1c8228f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.294027 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9q6dk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a290f38c-b94c-4233-9d98-9a54a728cedb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac0a78f8e6cfc9b374b179d250b45f9a86c225a033ed2c706b709a3007b7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8999aa32071dc4f05aef19c7e34333b8788246bb340368b22d15120c38465d60\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T13:08:06Z\\\",\\\"message\\\":\\\"2026-01-27T13:07:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1a21eda6-c61f-48b2-962f-091ea4922321\\\\n2026-01-27T13:07:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1a21eda6-c61f-48b2-962f-091ea4922321 to /host/opt/cni/bin/\\\\n2026-01-27T13:07:21Z [verbose] multus-daemon started\\\\n2026-01-27T13:07:21Z [verbose] 
Readiness Indicator file check\\\\n2026-01-27T13:08:06Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T13:07:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T13:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5xht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T13:07:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9q6dk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:22Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.298853 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.298890 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.298900 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.298917 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.298930 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:22Z","lastTransitionTime":"2026-01-27T13:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.401542 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.401596 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.401636 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.401656 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.401679 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:22Z","lastTransitionTime":"2026-01-27T13:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.464805 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.464912 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:08:22 crc kubenswrapper[4786]: E0127 13:08:22.464968 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.465022 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:08:22 crc kubenswrapper[4786]: E0127 13:08:22.465193 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:08:22 crc kubenswrapper[4786]: E0127 13:08:22.465290 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.472866 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 04:40:57.924074714 +0000 UTC Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.503915 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.503954 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.503978 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.504001 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.504016 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:22Z","lastTransitionTime":"2026-01-27T13:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.606806 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.606868 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.606885 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.606908 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.606926 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:22Z","lastTransitionTime":"2026-01-27T13:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.709494 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.709590 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.709654 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.709682 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.709699 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:22Z","lastTransitionTime":"2026-01-27T13:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.812980 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.813041 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.813061 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.813086 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.813104 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:22Z","lastTransitionTime":"2026-01-27T13:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.916302 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.916341 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.916349 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.916362 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:22 crc kubenswrapper[4786]: I0127 13:08:22.916375 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:22Z","lastTransitionTime":"2026-01-27T13:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.020106 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.020157 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.020171 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.020190 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.020204 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:23Z","lastTransitionTime":"2026-01-27T13:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.123751 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.123797 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.123810 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.123829 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.123845 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:23Z","lastTransitionTime":"2026-01-27T13:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.226367 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.226424 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.226441 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.226462 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.226476 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:23Z","lastTransitionTime":"2026-01-27T13:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.329360 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.329403 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.329413 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.329428 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.329439 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:23Z","lastTransitionTime":"2026-01-27T13:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.432476 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.432525 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.432538 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.432556 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.432568 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:23Z","lastTransitionTime":"2026-01-27T13:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.464966 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:08:23 crc kubenswrapper[4786]: E0127 13:08:23.465244 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.473672 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 18:48:43.153424698 +0000 UTC Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.535057 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.535109 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.535118 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.535133 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.535144 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:23Z","lastTransitionTime":"2026-01-27T13:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.637173 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.637224 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.637239 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.637264 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.637279 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:23Z","lastTransitionTime":"2026-01-27T13:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.741344 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.741447 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.741503 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.741529 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.741572 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:23Z","lastTransitionTime":"2026-01-27T13:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.843566 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.843629 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.843639 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.843652 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.843660 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:23Z","lastTransitionTime":"2026-01-27T13:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.946126 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.946177 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.946195 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.946213 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:23 crc kubenswrapper[4786]: I0127 13:08:23.946226 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:23Z","lastTransitionTime":"2026-01-27T13:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.049365 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.049432 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.049449 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.049474 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.049493 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:24Z","lastTransitionTime":"2026-01-27T13:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.152741 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.152835 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.152860 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.152891 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.152915 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:24Z","lastTransitionTime":"2026-01-27T13:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.217390 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.217469 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.217493 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.217525 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.217549 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:24Z","lastTransitionTime":"2026-01-27T13:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:24 crc kubenswrapper[4786]: E0127 13:08:24.239027 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18042af8-71e3-4882-b2ca-158fe4a2012f\\\",\\\"systemUUID\\\":\\\"56795cdc-7796-46ae-b42e-edbe6c464279\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:24Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.244102 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.244164 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.244190 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.244220 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.244243 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:24Z","lastTransitionTime":"2026-01-27T13:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:24 crc kubenswrapper[4786]: E0127 13:08:24.257993 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18042af8-71e3-4882-b2ca-158fe4a2012f\\\",\\\"systemUUID\\\":\\\"56795cdc-7796-46ae-b42e-edbe6c464279\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:24Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.262878 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.262923 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.262932 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.262947 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.262959 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:24Z","lastTransitionTime":"2026-01-27T13:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:24 crc kubenswrapper[4786]: E0127 13:08:24.281933 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18042af8-71e3-4882-b2ca-158fe4a2012f\\\",\\\"systemUUID\\\":\\\"56795cdc-7796-46ae-b42e-edbe6c464279\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:24Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.286664 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.286728 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.286747 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.286796 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.286814 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:24Z","lastTransitionTime":"2026-01-27T13:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:24 crc kubenswrapper[4786]: E0127 13:08:24.313231 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18042af8-71e3-4882-b2ca-158fe4a2012f\\\",\\\"systemUUID\\\":\\\"56795cdc-7796-46ae-b42e-edbe6c464279\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:24Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.321073 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.321127 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.321144 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.321171 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.321191 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:24Z","lastTransitionTime":"2026-01-27T13:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:24 crc kubenswrapper[4786]: E0127 13:08:24.336837 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T13:08:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18042af8-71e3-4882-b2ca-158fe4a2012f\\\",\\\"systemUUID\\\":\\\"56795cdc-7796-46ae-b42e-edbe6c464279\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T13:08:24Z is after 2025-08-24T17:21:41Z" Jan 27 13:08:24 crc kubenswrapper[4786]: E0127 13:08:24.336957 4786 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.338973 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.339030 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.339048 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.339071 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.339090 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:24Z","lastTransitionTime":"2026-01-27T13:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.441464 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.441525 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.441542 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.441569 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.441591 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:24Z","lastTransitionTime":"2026-01-27T13:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.464782 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.464854 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:08:24 crc kubenswrapper[4786]: E0127 13:08:24.464923 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.464961 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:08:24 crc kubenswrapper[4786]: E0127 13:08:24.465121 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:08:24 crc kubenswrapper[4786]: E0127 13:08:24.465201 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.474083 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 09:35:07.256023357 +0000 UTC Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.544448 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.544493 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.544504 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.544521 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.544533 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:24Z","lastTransitionTime":"2026-01-27T13:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.648164 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.648220 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.648235 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.648257 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.648269 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:24Z","lastTransitionTime":"2026-01-27T13:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.750783 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.750872 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.750911 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.750947 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.750975 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:24Z","lastTransitionTime":"2026-01-27T13:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.853443 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.853499 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.853512 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.853532 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.853543 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:24Z","lastTransitionTime":"2026-01-27T13:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.956383 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.956440 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.956448 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.956463 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:24 crc kubenswrapper[4786]: I0127 13:08:24.956491 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:24Z","lastTransitionTime":"2026-01-27T13:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.058734 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.058764 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.058772 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.058784 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.058792 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:25Z","lastTransitionTime":"2026-01-27T13:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.160769 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.160802 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.160811 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.160823 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.160832 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:25Z","lastTransitionTime":"2026-01-27T13:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.263376 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.263427 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.263440 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.263453 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.263464 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:25Z","lastTransitionTime":"2026-01-27T13:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.366377 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.366416 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.366633 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.366650 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.366663 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:25Z","lastTransitionTime":"2026-01-27T13:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.464866 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:08:25 crc kubenswrapper[4786]: E0127 13:08:25.465156 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.468053 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.468084 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.468091 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.468102 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.468110 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:25Z","lastTransitionTime":"2026-01-27T13:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.474418 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 18:50:26.526138726 +0000 UTC Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.570837 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.570864 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.570872 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.570884 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.570892 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:25Z","lastTransitionTime":"2026-01-27T13:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.672736 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.672765 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.672773 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.672786 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.672795 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:25Z","lastTransitionTime":"2026-01-27T13:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.775796 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.775834 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.775843 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.775856 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.775864 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:25Z","lastTransitionTime":"2026-01-27T13:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.878561 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.878642 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.878654 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.878668 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.878676 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:25Z","lastTransitionTime":"2026-01-27T13:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.981259 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.981299 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.981308 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.981322 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:25 crc kubenswrapper[4786]: I0127 13:08:25.981333 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:25Z","lastTransitionTime":"2026-01-27T13:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.083717 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.083760 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.083793 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.083811 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.083823 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:26Z","lastTransitionTime":"2026-01-27T13:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.186412 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.186458 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.186467 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.186480 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.186491 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:26Z","lastTransitionTime":"2026-01-27T13:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.288643 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.288679 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.288689 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.288703 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.288712 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:26Z","lastTransitionTime":"2026-01-27T13:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.391264 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.391300 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.391310 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.391324 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.391332 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:26Z","lastTransitionTime":"2026-01-27T13:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.464394 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.464390 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:08:26 crc kubenswrapper[4786]: E0127 13:08:26.464540 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:08:26 crc kubenswrapper[4786]: E0127 13:08:26.464577 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.464408 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:08:26 crc kubenswrapper[4786]: E0127 13:08:26.464670 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.475512 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 20:54:21.106445185 +0000 UTC Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.493100 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.493193 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.493212 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.493234 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.493248 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:26Z","lastTransitionTime":"2026-01-27T13:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.595494 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.595539 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.595548 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.595566 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.595578 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:26Z","lastTransitionTime":"2026-01-27T13:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.697976 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.698057 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.698070 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.698090 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.698103 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:26Z","lastTransitionTime":"2026-01-27T13:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.800266 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.800297 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.800305 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.800320 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.800328 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:26Z","lastTransitionTime":"2026-01-27T13:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.903087 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.903128 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.903140 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.903156 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:26 crc kubenswrapper[4786]: I0127 13:08:26.903169 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:26Z","lastTransitionTime":"2026-01-27T13:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.005226 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.005269 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.005279 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.005295 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.005305 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:27Z","lastTransitionTime":"2026-01-27T13:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.107129 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.107164 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.107175 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.107189 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.107199 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:27Z","lastTransitionTime":"2026-01-27T13:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.209232 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.209263 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.209271 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.209284 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.209292 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:27Z","lastTransitionTime":"2026-01-27T13:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.312150 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.312210 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.312231 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.312255 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.312274 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:27Z","lastTransitionTime":"2026-01-27T13:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.416004 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.416040 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.416048 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.416061 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.416072 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:27Z","lastTransitionTime":"2026-01-27T13:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.464595 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:08:27 crc kubenswrapper[4786]: E0127 13:08:27.464748 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.476665 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 18:14:56.081351571 +0000 UTC Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.513435 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=67.513414753 podStartE2EDuration="1m7.513414753s" podCreationTimestamp="2026-01-27 13:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:08:27.51259678 +0000 UTC m=+90.723210909" watchObservedRunningTime="2026-01-27 13:08:27.513414753 +0000 UTC m=+90.724028872" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.513653 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rgrth" podStartSLOduration=69.51364541 podStartE2EDuration="1m9.51364541s" podCreationTimestamp="2026-01-27 13:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:08:27.491821364 +0000 UTC m=+90.702435483" watchObservedRunningTime="2026-01-27 13:08:27.51364541 +0000 UTC m=+90.724259549" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.518986 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.519020 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.519028 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.519041 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.519051 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:27Z","lastTransitionTime":"2026-01-27T13:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.558009 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-9q6dk" podStartSLOduration=69.55798619 podStartE2EDuration="1m9.55798619s" podCreationTimestamp="2026-01-27 13:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:08:27.544720201 +0000 UTC m=+90.755334320" watchObservedRunningTime="2026-01-27 13:08:27.55798619 +0000 UTC m=+90.768600309" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.581926 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-bh896" podStartSLOduration=70.581910473 podStartE2EDuration="1m10.581910473s" podCreationTimestamp="2026-01-27 13:07:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:08:27.559486661 +0000 UTC m=+90.770100770" watchObservedRunningTime="2026-01-27 13:08:27.581910473 +0000 UTC m=+90.792524592" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.607736 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=43.607715988 podStartE2EDuration="43.607715988s" podCreationTimestamp="2026-01-27 13:07:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:08:27.58253319 +0000 UTC m=+90.793147309" watchObservedRunningTime="2026-01-27 13:08:27.607715988 +0000 UTC m=+90.818330097" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.620951 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podStartSLOduration=70.620928165 podStartE2EDuration="1m10.620928165s" podCreationTimestamp="2026-01-27 13:07:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:08:27.620524294 +0000 UTC m=+90.831138423" watchObservedRunningTime="2026-01-27 13:08:27.620928165 +0000 UTC m=+90.831542284" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.621438 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.621487 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.621501 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.621520 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.621532 4786 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:27Z","lastTransitionTime":"2026-01-27T13:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.631982 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=14.631963031 podStartE2EDuration="14.631963031s" podCreationTimestamp="2026-01-27 13:08:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:08:27.631301413 +0000 UTC m=+90.841915532" watchObservedRunningTime="2026-01-27 13:08:27.631963031 +0000 UTC m=+90.842577150" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.681816 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-prn84" podStartSLOduration=69.681799424 podStartE2EDuration="1m9.681799424s" podCreationTimestamp="2026-01-27 13:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:08:27.670174872 +0000 UTC m=+90.880788991" watchObservedRunningTime="2026-01-27 13:08:27.681799424 +0000 UTC m=+90.892413543" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.705413 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-bsqnf" podStartSLOduration=70.705397598 podStartE2EDuration="1m10.705397598s" podCreationTimestamp="2026-01-27 13:07:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 
13:08:27.694332351 +0000 UTC m=+90.904946470" watchObservedRunningTime="2026-01-27 13:08:27.705397598 +0000 UTC m=+90.916011717" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.724131 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.724173 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.724182 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.724196 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.724205 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:27Z","lastTransitionTime":"2026-01-27T13:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.732440 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=68.732417758 podStartE2EDuration="1m8.732417758s" podCreationTimestamp="2026-01-27 13:07:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:08:27.731689538 +0000 UTC m=+90.942303667" watchObservedRunningTime="2026-01-27 13:08:27.732417758 +0000 UTC m=+90.943031877" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.733250 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=71.733241011 podStartE2EDuration="1m11.733241011s" podCreationTimestamp="2026-01-27 13:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:08:27.718027689 +0000 UTC m=+90.928641808" watchObservedRunningTime="2026-01-27 13:08:27.733241011 +0000 UTC m=+90.943855130" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.827121 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.827177 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.827186 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.827204 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.827213 4786 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:27Z","lastTransitionTime":"2026-01-27T13:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.929506 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.929548 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.929558 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.929617 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:27 crc kubenswrapper[4786]: I0127 13:08:27.929631 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:27Z","lastTransitionTime":"2026-01-27T13:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.032069 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.032102 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.032112 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.032126 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.032136 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:28Z","lastTransitionTime":"2026-01-27T13:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.137928 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.138372 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.138388 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.138412 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.138428 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:28Z","lastTransitionTime":"2026-01-27T13:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.241182 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.241237 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.241250 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.241268 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.241278 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:28Z","lastTransitionTime":"2026-01-27T13:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.343986 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.344024 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.344032 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.344044 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.344053 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:28Z","lastTransitionTime":"2026-01-27T13:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.446266 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.446302 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.446311 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.446324 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.446333 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:28Z","lastTransitionTime":"2026-01-27T13:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.464739 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.464781 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.464796 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:08:28 crc kubenswrapper[4786]: E0127 13:08:28.464956 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:08:28 crc kubenswrapper[4786]: E0127 13:08:28.465030 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:08:28 crc kubenswrapper[4786]: E0127 13:08:28.465174 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.476995 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 18:37:05.080843126 +0000 UTC Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.548655 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.548691 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.548703 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.548718 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.548733 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:28Z","lastTransitionTime":"2026-01-27T13:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.651228 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.651272 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.651283 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.651301 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.651313 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:28Z","lastTransitionTime":"2026-01-27T13:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.753124 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.753159 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.753167 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.753179 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.753188 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:28Z","lastTransitionTime":"2026-01-27T13:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.855992 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.856039 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.856049 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.856067 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.856078 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:28Z","lastTransitionTime":"2026-01-27T13:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.958548 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.958600 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.958624 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.958638 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:28 crc kubenswrapper[4786]: I0127 13:08:28.958647 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:28Z","lastTransitionTime":"2026-01-27T13:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.060551 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.060643 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.060667 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.060697 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.060722 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:29Z","lastTransitionTime":"2026-01-27T13:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.163003 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.163046 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.163060 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.163081 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.163092 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:29Z","lastTransitionTime":"2026-01-27T13:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.265092 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.265136 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.265147 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.265161 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.265171 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:29Z","lastTransitionTime":"2026-01-27T13:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.367270 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.367317 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.367329 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.367348 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.367361 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:29Z","lastTransitionTime":"2026-01-27T13:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.464080 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:08:29 crc kubenswrapper[4786]: E0127 13:08:29.464211 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.469241 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.469277 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.469287 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.469302 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.469314 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:29Z","lastTransitionTime":"2026-01-27T13:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.477660 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 05:47:43.290951975 +0000 UTC Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.572028 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.572063 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.572075 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.572091 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.572102 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:29Z","lastTransitionTime":"2026-01-27T13:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.675270 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.675317 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.675326 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.675339 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.675349 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:29Z","lastTransitionTime":"2026-01-27T13:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.777383 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.777438 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.777450 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.777468 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.777482 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:29Z","lastTransitionTime":"2026-01-27T13:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.881085 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.881152 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.881175 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.881200 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.881216 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:29Z","lastTransitionTime":"2026-01-27T13:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.983881 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.984008 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.984029 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.984054 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:29 crc kubenswrapper[4786]: I0127 13:08:29.984073 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:29Z","lastTransitionTime":"2026-01-27T13:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.086073 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.086127 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.086136 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.086150 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.086159 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:30Z","lastTransitionTime":"2026-01-27T13:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.188323 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.188357 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.188383 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.188395 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.188408 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:30Z","lastTransitionTime":"2026-01-27T13:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.290577 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.290644 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.290653 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.290665 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.290673 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:30Z","lastTransitionTime":"2026-01-27T13:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.393143 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.393185 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.393193 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.393207 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.393217 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:30Z","lastTransitionTime":"2026-01-27T13:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.464200 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.464296 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.464200 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:08:30 crc kubenswrapper[4786]: E0127 13:08:30.464427 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:08:30 crc kubenswrapper[4786]: E0127 13:08:30.464539 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:08:30 crc kubenswrapper[4786]: E0127 13:08:30.464653 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.478486 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 12:00:27.325832195 +0000 UTC Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.495194 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.495241 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.495254 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.495268 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.495277 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:30Z","lastTransitionTime":"2026-01-27T13:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.597949 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.597990 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.598002 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.598018 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.598031 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:30Z","lastTransitionTime":"2026-01-27T13:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.700261 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.700305 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.700317 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.700335 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.700346 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:30Z","lastTransitionTime":"2026-01-27T13:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.801999 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.802033 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.802041 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.802056 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.802066 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:30Z","lastTransitionTime":"2026-01-27T13:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.903487 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.903515 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.903524 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.903538 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:30 crc kubenswrapper[4786]: I0127 13:08:30.903548 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:30Z","lastTransitionTime":"2026-01-27T13:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.006004 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.006047 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.006055 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.006069 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.006080 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:31Z","lastTransitionTime":"2026-01-27T13:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.112438 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.112490 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.112500 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.112516 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.112534 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:31Z","lastTransitionTime":"2026-01-27T13:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.215510 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.215557 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.215569 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.215586 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.215616 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:31Z","lastTransitionTime":"2026-01-27T13:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.319167 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.319213 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.319221 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.319236 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.319246 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:31Z","lastTransitionTime":"2026-01-27T13:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.420855 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.420890 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.420898 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.420911 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.420920 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:31Z","lastTransitionTime":"2026-01-27T13:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.464861 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:08:31 crc kubenswrapper[4786]: E0127 13:08:31.465092 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.479547 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 18:31:29.020966621 +0000 UTC Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.523503 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.523537 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.523546 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.523561 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.523571 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:31Z","lastTransitionTime":"2026-01-27T13:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.625986 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.626017 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.626024 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.626038 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.626047 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:31Z","lastTransitionTime":"2026-01-27T13:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.727825 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.727869 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.727880 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.727896 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.727906 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:31Z","lastTransitionTime":"2026-01-27T13:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.829865 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.829906 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.829916 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.829931 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.829942 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:31Z","lastTransitionTime":"2026-01-27T13:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.932534 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.932571 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.932581 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.932595 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:31 crc kubenswrapper[4786]: I0127 13:08:31.932623 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:31Z","lastTransitionTime":"2026-01-27T13:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.035509 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.035555 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.035566 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.035582 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.035595 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:32Z","lastTransitionTime":"2026-01-27T13:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.138408 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.138520 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.138541 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.138563 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.138576 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:32Z","lastTransitionTime":"2026-01-27T13:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.240801 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.240866 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.240875 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.240891 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.240899 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:32Z","lastTransitionTime":"2026-01-27T13:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.343981 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.344045 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.344056 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.344075 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.344084 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:32Z","lastTransitionTime":"2026-01-27T13:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.446588 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.446646 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.446657 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.446673 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.446684 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:32Z","lastTransitionTime":"2026-01-27T13:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.464236 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.464272 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.464337 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:08:32 crc kubenswrapper[4786]: E0127 13:08:32.464406 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:08:32 crc kubenswrapper[4786]: E0127 13:08:32.464551 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:08:32 crc kubenswrapper[4786]: E0127 13:08:32.464665 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.480443 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 04:14:34.770799064 +0000 UTC Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.548998 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.549072 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.549089 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.549112 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.549130 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:32Z","lastTransitionTime":"2026-01-27T13:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.652107 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.652149 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.652160 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.652176 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.652190 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:32Z","lastTransitionTime":"2026-01-27T13:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.754080 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.754119 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.754128 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.754142 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.754152 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:32Z","lastTransitionTime":"2026-01-27T13:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.857447 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.857494 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.857506 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.857529 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.857543 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:32Z","lastTransitionTime":"2026-01-27T13:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.961124 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.961463 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.961480 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.961501 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:32 crc kubenswrapper[4786]: I0127 13:08:32.961518 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:32Z","lastTransitionTime":"2026-01-27T13:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.065076 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.065131 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.065139 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.065155 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.065165 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:33Z","lastTransitionTime":"2026-01-27T13:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.167347 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.167391 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.167399 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.167414 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.167424 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:33Z","lastTransitionTime":"2026-01-27T13:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.270276 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.270307 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.270315 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.270327 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.270335 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:33Z","lastTransitionTime":"2026-01-27T13:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.372766 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.372808 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.372821 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.372836 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.372845 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:33Z","lastTransitionTime":"2026-01-27T13:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.464072 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:08:33 crc kubenswrapper[4786]: E0127 13:08:33.464234 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.474514 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.474551 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.474560 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.474571 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.474580 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:33Z","lastTransitionTime":"2026-01-27T13:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.480831 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 14:14:09.237619706 +0000 UTC Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.577185 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.577234 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.577248 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.577260 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.577271 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:33Z","lastTransitionTime":"2026-01-27T13:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.679627 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.679665 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.679677 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.679693 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.679706 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:33Z","lastTransitionTime":"2026-01-27T13:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.782237 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.782275 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.782286 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.782299 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.782307 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:33Z","lastTransitionTime":"2026-01-27T13:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.884296 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.884323 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.884331 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.884342 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.884352 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:33Z","lastTransitionTime":"2026-01-27T13:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.986868 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.986916 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.986934 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.986956 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:33 crc kubenswrapper[4786]: I0127 13:08:33.986974 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:33Z","lastTransitionTime":"2026-01-27T13:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.093640 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.093692 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.093708 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.093730 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.093745 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:34Z","lastTransitionTime":"2026-01-27T13:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.196233 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.196268 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.196279 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.196291 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.196299 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:34Z","lastTransitionTime":"2026-01-27T13:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.299141 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.299193 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.299205 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.299221 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.299233 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:34Z","lastTransitionTime":"2026-01-27T13:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.401042 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.401365 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.401462 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.401562 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.401692 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:34Z","lastTransitionTime":"2026-01-27T13:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.463993 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:08:34 crc kubenswrapper[4786]: E0127 13:08:34.464469 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.464110 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.464094 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:08:34 crc kubenswrapper[4786]: E0127 13:08:34.464975 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:08:34 crc kubenswrapper[4786]: E0127 13:08:34.464850 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.481085 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 10:51:32.696955731 +0000 UTC Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.504008 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.504302 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.504471 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.504593 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.504777 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:34Z","lastTransitionTime":"2026-01-27T13:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.607419 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.607699 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.607780 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.607884 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.607972 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:34Z","lastTransitionTime":"2026-01-27T13:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.687022 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.687281 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.687406 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.687501 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.687650 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T13:08:34Z","lastTransitionTime":"2026-01-27T13:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.726236 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-69cfb"] Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.726788 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69cfb" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.728703 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.728924 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.728987 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.732035 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.850692 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/974e7163-0699-4d67-b223-9edfea618f8c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-69cfb\" (UID: \"974e7163-0699-4d67-b223-9edfea618f8c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69cfb" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.850748 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/974e7163-0699-4d67-b223-9edfea618f8c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-69cfb\" (UID: \"974e7163-0699-4d67-b223-9edfea618f8c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69cfb" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.850778 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/974e7163-0699-4d67-b223-9edfea618f8c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-69cfb\" (UID: \"974e7163-0699-4d67-b223-9edfea618f8c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69cfb" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.850811 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/974e7163-0699-4d67-b223-9edfea618f8c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-69cfb\" (UID: \"974e7163-0699-4d67-b223-9edfea618f8c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69cfb" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.850875 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/974e7163-0699-4d67-b223-9edfea618f8c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-69cfb\" (UID: \"974e7163-0699-4d67-b223-9edfea618f8c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69cfb" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.951924 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/974e7163-0699-4d67-b223-9edfea618f8c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-69cfb\" (UID: \"974e7163-0699-4d67-b223-9edfea618f8c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69cfb" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.951969 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/974e7163-0699-4d67-b223-9edfea618f8c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-69cfb\" (UID: \"974e7163-0699-4d67-b223-9edfea618f8c\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69cfb" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.951986 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/974e7163-0699-4d67-b223-9edfea618f8c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-69cfb\" (UID: \"974e7163-0699-4d67-b223-9edfea618f8c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69cfb" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.952010 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/974e7163-0699-4d67-b223-9edfea618f8c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-69cfb\" (UID: \"974e7163-0699-4d67-b223-9edfea618f8c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69cfb" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.952028 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/974e7163-0699-4d67-b223-9edfea618f8c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-69cfb\" (UID: \"974e7163-0699-4d67-b223-9edfea618f8c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69cfb" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.952083 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/974e7163-0699-4d67-b223-9edfea618f8c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-69cfb\" (UID: \"974e7163-0699-4d67-b223-9edfea618f8c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69cfb" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.952128 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/974e7163-0699-4d67-b223-9edfea618f8c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-69cfb\" (UID: \"974e7163-0699-4d67-b223-9edfea618f8c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69cfb" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.952886 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/974e7163-0699-4d67-b223-9edfea618f8c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-69cfb\" (UID: \"974e7163-0699-4d67-b223-9edfea618f8c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69cfb" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.964089 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/974e7163-0699-4d67-b223-9edfea618f8c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-69cfb\" (UID: \"974e7163-0699-4d67-b223-9edfea618f8c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69cfb" Jan 27 13:08:34 crc kubenswrapper[4786]: I0127 13:08:34.967490 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/974e7163-0699-4d67-b223-9edfea618f8c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-69cfb\" (UID: \"974e7163-0699-4d67-b223-9edfea618f8c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69cfb" Jan 27 13:08:35 crc kubenswrapper[4786]: I0127 13:08:35.040921 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69cfb" Jan 27 13:08:35 crc kubenswrapper[4786]: I0127 13:08:35.464934 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:08:35 crc kubenswrapper[4786]: E0127 13:08:35.465196 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:08:35 crc kubenswrapper[4786]: I0127 13:08:35.482109 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 11:19:49.088721601 +0000 UTC Jan 27 13:08:35 crc kubenswrapper[4786]: I0127 13:08:35.482153 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 27 13:08:35 crc kubenswrapper[4786]: I0127 13:08:35.490983 4786 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 27 13:08:35 crc kubenswrapper[4786]: I0127 13:08:35.994253 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69cfb" event={"ID":"974e7163-0699-4d67-b223-9edfea618f8c","Type":"ContainerStarted","Data":"26e60a7634f6a214df1bdbbc46381be1c4bc79d7349d4cf853f2f50942bacdd7"} Jan 27 13:08:35 crc kubenswrapper[4786]: I0127 13:08:35.994305 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69cfb" event={"ID":"974e7163-0699-4d67-b223-9edfea618f8c","Type":"ContainerStarted","Data":"4475c7194ec211265538f96aa4cd5b0c7fef858f818f4a8f0eafe7e1b3f30160"} Jan 27 13:08:36 crc kubenswrapper[4786]: I0127 13:08:36.164740 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496-metrics-certs\") pod \"network-metrics-daemon-8jf77\" (UID: \"a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496\") " pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:08:36 crc kubenswrapper[4786]: E0127 13:08:36.164881 4786 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 13:08:36 crc kubenswrapper[4786]: E0127 13:08:36.164925 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496-metrics-certs podName:a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496 nodeName:}" failed. No retries permitted until 2026-01-27 13:09:40.164912589 +0000 UTC m=+163.375526708 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496-metrics-certs") pod "network-metrics-daemon-8jf77" (UID: "a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 13:08:36 crc kubenswrapper[4786]: I0127 13:08:36.464034 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:08:36 crc kubenswrapper[4786]: I0127 13:08:36.464035 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:08:36 crc kubenswrapper[4786]: I0127 13:08:36.464179 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:08:36 crc kubenswrapper[4786]: E0127 13:08:36.464295 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:08:36 crc kubenswrapper[4786]: E0127 13:08:36.464939 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:08:36 crc kubenswrapper[4786]: E0127 13:08:36.465036 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:08:36 crc kubenswrapper[4786]: I0127 13:08:36.465466 4786 scope.go:117] "RemoveContainer" containerID="1a2a9faaefef939acbc751ed484c251a13489af87b8f9f705fa46c6dbfb003e5" Jan 27 13:08:36 crc kubenswrapper[4786]: E0127 13:08:36.465793 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6d56q_openshift-ovn-kubernetes(ad21a31d-efbf-4c10-b3d1-0f6cf71793bd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" Jan 27 13:08:37 crc kubenswrapper[4786]: I0127 13:08:37.463840 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:08:37 crc kubenswrapper[4786]: E0127 13:08:37.465389 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:08:38 crc kubenswrapper[4786]: I0127 13:08:38.464841 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:08:38 crc kubenswrapper[4786]: E0127 13:08:38.465222 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:08:38 crc kubenswrapper[4786]: I0127 13:08:38.465017 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:08:38 crc kubenswrapper[4786]: I0127 13:08:38.465297 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:08:38 crc kubenswrapper[4786]: E0127 13:08:38.465388 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:08:38 crc kubenswrapper[4786]: E0127 13:08:38.465438 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:08:39 crc kubenswrapper[4786]: I0127 13:08:39.464711 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:08:39 crc kubenswrapper[4786]: E0127 13:08:39.464861 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:08:40 crc kubenswrapper[4786]: I0127 13:08:40.463839 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:08:40 crc kubenswrapper[4786]: I0127 13:08:40.463873 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:08:40 crc kubenswrapper[4786]: I0127 13:08:40.463910 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:08:40 crc kubenswrapper[4786]: E0127 13:08:40.464022 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:08:40 crc kubenswrapper[4786]: E0127 13:08:40.464096 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:08:40 crc kubenswrapper[4786]: E0127 13:08:40.464159 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:08:41 crc kubenswrapper[4786]: I0127 13:08:41.464477 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:08:41 crc kubenswrapper[4786]: E0127 13:08:41.464754 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:08:42 crc kubenswrapper[4786]: I0127 13:08:42.464320 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:08:42 crc kubenswrapper[4786]: I0127 13:08:42.464417 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:08:42 crc kubenswrapper[4786]: I0127 13:08:42.464583 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:08:42 crc kubenswrapper[4786]: E0127 13:08:42.464707 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:08:42 crc kubenswrapper[4786]: E0127 13:08:42.465135 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:08:42 crc kubenswrapper[4786]: E0127 13:08:42.465266 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:08:43 crc kubenswrapper[4786]: I0127 13:08:43.464551 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:08:43 crc kubenswrapper[4786]: E0127 13:08:43.464743 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:08:44 crc kubenswrapper[4786]: I0127 13:08:44.464478 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:08:44 crc kubenswrapper[4786]: I0127 13:08:44.464512 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:08:44 crc kubenswrapper[4786]: E0127 13:08:44.464640 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:08:44 crc kubenswrapper[4786]: E0127 13:08:44.464770 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:08:44 crc kubenswrapper[4786]: I0127 13:08:44.464509 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:08:44 crc kubenswrapper[4786]: E0127 13:08:44.465437 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:08:45 crc kubenswrapper[4786]: I0127 13:08:45.464423 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:08:45 crc kubenswrapper[4786]: E0127 13:08:45.464552 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:08:46 crc kubenswrapper[4786]: I0127 13:08:46.464531 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:08:46 crc kubenswrapper[4786]: I0127 13:08:46.464540 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:08:46 crc kubenswrapper[4786]: E0127 13:08:46.464832 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:08:46 crc kubenswrapper[4786]: E0127 13:08:46.465057 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:08:46 crc kubenswrapper[4786]: I0127 13:08:46.465216 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:08:46 crc kubenswrapper[4786]: E0127 13:08:46.465312 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:08:47 crc kubenswrapper[4786]: I0127 13:08:47.468169 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:08:47 crc kubenswrapper[4786]: E0127 13:08:47.470137 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:08:48 crc kubenswrapper[4786]: I0127 13:08:48.464679 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:08:48 crc kubenswrapper[4786]: I0127 13:08:48.464730 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:08:48 crc kubenswrapper[4786]: E0127 13:08:48.465321 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:08:48 crc kubenswrapper[4786]: I0127 13:08:48.464747 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:08:48 crc kubenswrapper[4786]: E0127 13:08:48.465523 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:08:48 crc kubenswrapper[4786]: E0127 13:08:48.465401 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:08:49 crc kubenswrapper[4786]: I0127 13:08:49.464843 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:08:49 crc kubenswrapper[4786]: E0127 13:08:49.464997 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:08:50 crc kubenswrapper[4786]: I0127 13:08:50.464918 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:08:50 crc kubenswrapper[4786]: I0127 13:08:50.464950 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:08:50 crc kubenswrapper[4786]: E0127 13:08:50.465050 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:08:50 crc kubenswrapper[4786]: I0127 13:08:50.464917 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:08:50 crc kubenswrapper[4786]: I0127 13:08:50.465785 4786 scope.go:117] "RemoveContainer" containerID="1a2a9faaefef939acbc751ed484c251a13489af87b8f9f705fa46c6dbfb003e5" Jan 27 13:08:50 crc kubenswrapper[4786]: E0127 13:08:50.465827 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:08:50 crc kubenswrapper[4786]: E0127 13:08:50.465935 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6d56q_openshift-ovn-kubernetes(ad21a31d-efbf-4c10-b3d1-0f6cf71793bd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" Jan 27 13:08:50 crc kubenswrapper[4786]: E0127 13:08:50.465940 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:08:51 crc kubenswrapper[4786]: I0127 13:08:51.464754 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:08:51 crc kubenswrapper[4786]: E0127 13:08:51.464900 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:08:52 crc kubenswrapper[4786]: I0127 13:08:52.464681 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:08:52 crc kubenswrapper[4786]: I0127 13:08:52.464733 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:08:52 crc kubenswrapper[4786]: I0127 13:08:52.464838 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:08:52 crc kubenswrapper[4786]: E0127 13:08:52.464961 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:08:52 crc kubenswrapper[4786]: E0127 13:08:52.464867 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:08:52 crc kubenswrapper[4786]: E0127 13:08:52.465164 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:08:53 crc kubenswrapper[4786]: I0127 13:08:53.050088 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9q6dk_a290f38c-b94c-4233-9d98-9a54a728cedb/kube-multus/1.log" Jan 27 13:08:53 crc kubenswrapper[4786]: I0127 13:08:53.050863 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9q6dk_a290f38c-b94c-4233-9d98-9a54a728cedb/kube-multus/0.log" Jan 27 13:08:53 crc kubenswrapper[4786]: I0127 13:08:53.051047 4786 generic.go:334] "Generic (PLEG): container finished" podID="a290f38c-b94c-4233-9d98-9a54a728cedb" containerID="e07ac0a78f8e6cfc9b374b179d250b45f9a86c225a033ed2c706b709a3007b7f" exitCode=1 Jan 27 13:08:53 crc kubenswrapper[4786]: I0127 13:08:53.051171 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9q6dk" event={"ID":"a290f38c-b94c-4233-9d98-9a54a728cedb","Type":"ContainerDied","Data":"e07ac0a78f8e6cfc9b374b179d250b45f9a86c225a033ed2c706b709a3007b7f"} Jan 27 13:08:53 crc kubenswrapper[4786]: I0127 13:08:53.051270 4786 scope.go:117] "RemoveContainer" containerID="8999aa32071dc4f05aef19c7e34333b8788246bb340368b22d15120c38465d60" Jan 27 13:08:53 crc kubenswrapper[4786]: I0127 13:08:53.052148 4786 scope.go:117] "RemoveContainer" containerID="e07ac0a78f8e6cfc9b374b179d250b45f9a86c225a033ed2c706b709a3007b7f" Jan 27 13:08:53 crc kubenswrapper[4786]: E0127 13:08:53.052483 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-9q6dk_openshift-multus(a290f38c-b94c-4233-9d98-9a54a728cedb)\"" pod="openshift-multus/multus-9q6dk" podUID="a290f38c-b94c-4233-9d98-9a54a728cedb" Jan 27 13:08:53 crc kubenswrapper[4786]: I0127 13:08:53.081114 4786 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69cfb" podStartSLOduration=96.081097893 podStartE2EDuration="1m36.081097893s" podCreationTimestamp="2026-01-27 13:07:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:08:36.00633774 +0000 UTC m=+99.216951869" watchObservedRunningTime="2026-01-27 13:08:53.081097893 +0000 UTC m=+116.291712012" Jan 27 13:08:53 crc kubenswrapper[4786]: I0127 13:08:53.464950 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:08:53 crc kubenswrapper[4786]: E0127 13:08:53.465245 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:08:54 crc kubenswrapper[4786]: I0127 13:08:54.054910 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9q6dk_a290f38c-b94c-4233-9d98-9a54a728cedb/kube-multus/1.log" Jan 27 13:08:54 crc kubenswrapper[4786]: I0127 13:08:54.464227 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:08:54 crc kubenswrapper[4786]: I0127 13:08:54.464341 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:08:54 crc kubenswrapper[4786]: E0127 13:08:54.464437 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:08:54 crc kubenswrapper[4786]: E0127 13:08:54.464504 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:08:54 crc kubenswrapper[4786]: I0127 13:08:54.464341 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:08:54 crc kubenswrapper[4786]: E0127 13:08:54.464705 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:08:55 crc kubenswrapper[4786]: I0127 13:08:55.464111 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:08:55 crc kubenswrapper[4786]: E0127 13:08:55.464292 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:08:56 crc kubenswrapper[4786]: I0127 13:08:56.464109 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:08:56 crc kubenswrapper[4786]: I0127 13:08:56.464231 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:08:56 crc kubenswrapper[4786]: I0127 13:08:56.464334 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:08:56 crc kubenswrapper[4786]: E0127 13:08:56.464457 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:08:56 crc kubenswrapper[4786]: E0127 13:08:56.464592 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:08:56 crc kubenswrapper[4786]: E0127 13:08:56.464906 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:08:57 crc kubenswrapper[4786]: E0127 13:08:57.416580 4786 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 27 13:08:57 crc kubenswrapper[4786]: I0127 13:08:57.464820 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:08:57 crc kubenswrapper[4786]: E0127 13:08:57.466894 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:08:57 crc kubenswrapper[4786]: E0127 13:08:57.571538 4786 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 13:08:58 crc kubenswrapper[4786]: I0127 13:08:58.463999 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:08:58 crc kubenswrapper[4786]: I0127 13:08:58.464117 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:08:58 crc kubenswrapper[4786]: I0127 13:08:58.464252 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:08:58 crc kubenswrapper[4786]: E0127 13:08:58.464298 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:08:58 crc kubenswrapper[4786]: E0127 13:08:58.464468 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:08:58 crc kubenswrapper[4786]: E0127 13:08:58.464744 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:08:59 crc kubenswrapper[4786]: I0127 13:08:59.464469 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:08:59 crc kubenswrapper[4786]: E0127 13:08:59.464784 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:09:00 crc kubenswrapper[4786]: I0127 13:09:00.464458 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:09:00 crc kubenswrapper[4786]: I0127 13:09:00.464500 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:09:00 crc kubenswrapper[4786]: E0127 13:09:00.464713 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:09:00 crc kubenswrapper[4786]: I0127 13:09:00.464742 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:09:00 crc kubenswrapper[4786]: E0127 13:09:00.464883 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:09:00 crc kubenswrapper[4786]: E0127 13:09:00.465087 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:09:01 crc kubenswrapper[4786]: I0127 13:09:01.464419 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:09:01 crc kubenswrapper[4786]: E0127 13:09:01.464588 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:09:02 crc kubenswrapper[4786]: I0127 13:09:02.465054 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:09:02 crc kubenswrapper[4786]: I0127 13:09:02.465103 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:09:02 crc kubenswrapper[4786]: E0127 13:09:02.465212 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:09:02 crc kubenswrapper[4786]: I0127 13:09:02.465255 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:09:02 crc kubenswrapper[4786]: E0127 13:09:02.465463 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:09:02 crc kubenswrapper[4786]: E0127 13:09:02.465574 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:09:02 crc kubenswrapper[4786]: I0127 13:09:02.466351 4786 scope.go:117] "RemoveContainer" containerID="1a2a9faaefef939acbc751ed484c251a13489af87b8f9f705fa46c6dbfb003e5" Jan 27 13:09:02 crc kubenswrapper[4786]: E0127 13:09:02.572841 4786 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 13:09:03 crc kubenswrapper[4786]: I0127 13:09:03.092514 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6d56q_ad21a31d-efbf-4c10-b3d1-0f6cf71793bd/ovnkube-controller/3.log" Jan 27 13:09:03 crc kubenswrapper[4786]: I0127 13:09:03.094976 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" event={"ID":"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd","Type":"ContainerStarted","Data":"e7213dd9c86e4a91a6d918e3ce21d05931466c84940aa2d70a3e07fe480b4c7e"} Jan 27 13:09:03 crc kubenswrapper[4786]: I0127 13:09:03.095387 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:09:03 crc kubenswrapper[4786]: I0127 13:09:03.464952 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:09:03 crc kubenswrapper[4786]: E0127 13:09:03.465799 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:09:03 crc kubenswrapper[4786]: I0127 13:09:03.541022 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" podStartSLOduration=105.541002422 podStartE2EDuration="1m45.541002422s" podCreationTimestamp="2026-01-27 13:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:03.123969874 +0000 UTC m=+126.334583993" watchObservedRunningTime="2026-01-27 13:09:03.541002422 +0000 UTC m=+126.751616542" Jan 27 13:09:03 crc kubenswrapper[4786]: I0127 13:09:03.541755 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-8jf77"] Jan 27 13:09:04 crc kubenswrapper[4786]: I0127 13:09:04.099436 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:09:04 crc kubenswrapper[4786]: E0127 13:09:04.100141 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:09:04 crc kubenswrapper[4786]: I0127 13:09:04.464873 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:09:04 crc kubenswrapper[4786]: I0127 13:09:04.464925 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:09:04 crc kubenswrapper[4786]: E0127 13:09:04.465027 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:09:04 crc kubenswrapper[4786]: I0127 13:09:04.464945 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:09:04 crc kubenswrapper[4786]: E0127 13:09:04.465207 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:09:04 crc kubenswrapper[4786]: E0127 13:09:04.465281 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:09:05 crc kubenswrapper[4786]: I0127 13:09:05.464525 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:09:05 crc kubenswrapper[4786]: E0127 13:09:05.464788 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:09:05 crc kubenswrapper[4786]: I0127 13:09:05.465035 4786 scope.go:117] "RemoveContainer" containerID="e07ac0a78f8e6cfc9b374b179d250b45f9a86c225a033ed2c706b709a3007b7f" Jan 27 13:09:06 crc kubenswrapper[4786]: I0127 13:09:06.111904 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9q6dk_a290f38c-b94c-4233-9d98-9a54a728cedb/kube-multus/1.log" Jan 27 13:09:06 crc kubenswrapper[4786]: I0127 13:09:06.111971 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9q6dk" event={"ID":"a290f38c-b94c-4233-9d98-9a54a728cedb","Type":"ContainerStarted","Data":"22f3b0dc9f3dfb4b927b2423d15f1ec1295972f8ddb685dfc978ddd9f16c2ea4"} Jan 27 13:09:06 crc kubenswrapper[4786]: I0127 13:09:06.464730 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:09:06 crc kubenswrapper[4786]: I0127 13:09:06.464880 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:09:06 crc kubenswrapper[4786]: I0127 13:09:06.464766 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:09:06 crc kubenswrapper[4786]: E0127 13:09:06.465068 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 13:09:06 crc kubenswrapper[4786]: E0127 13:09:06.464937 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 13:09:06 crc kubenswrapper[4786]: E0127 13:09:06.465235 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 13:09:07 crc kubenswrapper[4786]: I0127 13:09:07.464104 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:09:07 crc kubenswrapper[4786]: E0127 13:09:07.464943 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8jf77" podUID="a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496" Jan 27 13:09:08 crc kubenswrapper[4786]: I0127 13:09:08.463990 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 13:09:08 crc kubenswrapper[4786]: I0127 13:09:08.464034 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 13:09:08 crc kubenswrapper[4786]: I0127 13:09:08.464013 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:09:08 crc kubenswrapper[4786]: I0127 13:09:08.466553 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 27 13:09:08 crc kubenswrapper[4786]: I0127 13:09:08.466918 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 27 13:09:08 crc kubenswrapper[4786]: I0127 13:09:08.469993 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 27 13:09:08 crc kubenswrapper[4786]: I0127 13:09:08.470081 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 27 13:09:09 crc kubenswrapper[4786]: I0127 13:09:09.464458 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:09:09 crc kubenswrapper[4786]: I0127 13:09:09.468062 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 27 13:09:09 crc kubenswrapper[4786]: I0127 13:09:09.468142 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 27 13:09:10 crc kubenswrapper[4786]: I0127 13:09:10.641997 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.881091 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.916556 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-j6ww5"] Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.917208 4786 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-pgkx8"] Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.917592 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wtdrs"] Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.917654 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-j6ww5" Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.917702 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-pgkx8" Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.918520 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wtdrs" Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.922263 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.922642 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.924064 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9l4wd"] Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.924880 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9l4wd" Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.925565 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.926851 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.926918 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.926934 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.926858 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.931201 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.932889 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.933045 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.933691 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.933872 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 27 13:09:15 crc kubenswrapper[4786]: W0127 13:09:15.934055 4786 reflector.go:561] object-"openshift-oauth-apiserver"/"etcd-serving-ca": 
failed to list *v1.ConfigMap: configmaps "etcd-serving-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Jan 27 13:09:15 crc kubenswrapper[4786]: E0127 13:09:15.934097 4786 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"etcd-serving-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 13:09:15 crc kubenswrapper[4786]: W0127 13:09:15.934235 4786 reflector.go:561] object-"openshift-oauth-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Jan 27 13:09:15 crc kubenswrapper[4786]: E0127 13:09:15.934263 4786 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 13:09:15 crc kubenswrapper[4786]: W0127 13:09:15.934351 4786 reflector.go:561] object-"openshift-oauth-apiserver"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Jan 27 13:09:15 crc 
kubenswrapper[4786]: E0127 13:09:15.934368 4786 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.934446 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.934625 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 13:09:15 crc kubenswrapper[4786]: W0127 13:09:15.934756 4786 reflector.go:561] object-"openshift-oauth-apiserver"/"etcd-client": failed to list *v1.Secret: secrets "etcd-client" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Jan 27 13:09:15 crc kubenswrapper[4786]: E0127 13:09:15.934787 4786 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"etcd-client\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 13:09:15 crc kubenswrapper[4786]: W0127 13:09:15.934839 4786 reflector.go:561] object-"openshift-oauth-apiserver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-oauth-apiserver": no 
relationship found between node 'crc' and this object Jan 27 13:09:15 crc kubenswrapper[4786]: E0127 13:09:15.934855 4786 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 13:09:15 crc kubenswrapper[4786]: W0127 13:09:15.934902 4786 reflector.go:561] object-"openshift-oauth-apiserver"/"encryption-config-1": failed to list *v1.Secret: secrets "encryption-config-1" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Jan 27 13:09:15 crc kubenswrapper[4786]: E0127 13:09:15.934918 4786 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"encryption-config-1\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.936844 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.937217 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.937373 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 13:09:15 crc kubenswrapper[4786]: 
I0127 13:09:15.937579 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.937578 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-72vpf"] Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.937729 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.937954 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.938072 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.938424 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-qzpj9"] Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.938930 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-wv4hn"] Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.939134 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 27 13:09:15 crc kubenswrapper[4786]: W0127 13:09:15.939058 4786 reflector.go:561] object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq": failed to list *v1.Secret: secrets "oauth-apiserver-sa-dockercfg-6r2bq" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.939305 4786 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 27 13:09:15 crc kubenswrapper[4786]: E0127 13:09:15.939298 4786 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-6r2bq\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"oauth-apiserver-sa-dockercfg-6r2bq\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.939475 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-qzpj9" Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.939482 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-72vpf" Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.939501 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-wv4hn" Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.940802 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8gmql"] Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.945539 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8gmql" Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.946368 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-nsc4d"] Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.946710 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7l2ns"] Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.947155 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7l2ns" Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.947651 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.950951 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-k47sw"] Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.951292 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vszvp"] Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.951685 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vszvp" Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.951721 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-vwjp5"] Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.951918 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k47sw" Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.958269 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.959882 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-8nm76"] Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.960034 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-vwjp5" Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.962408 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qrqjl"] Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.962792 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-ddxlt"] Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.963210 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc82m"] Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.963498 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-qmkrf"] Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.964060 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qmkrf" Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.964221 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-ddxlt" Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.964848 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-8nm76" Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.966969 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc82m" Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.967436 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.988458 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sllw5"] Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.989102 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sllw5" Jan 27 13:09:15 crc kubenswrapper[4786]: I0127 13:09:15.999103 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.023473 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-lr6n6"] Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.024010 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-lr6n6" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.058660 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.059533 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.059576 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.059680 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.059880 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.060227 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.060316 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.060374 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.060476 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.060672 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 27 
13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.060740 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.060778 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.061022 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.061109 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.061121 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.061237 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.061459 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.061678 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.061697 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wtdrs"] Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.061063 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.062223 4786 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4c8h9"] Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.062989 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4c8h9" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.064798 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.065005 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.065495 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.065681 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.066129 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.066580 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.067254 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.067810 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.070053 4786 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h2lh7"] Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.070703 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h2lh7" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.071662 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.080898 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.081093 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.081166 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.081401 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.081507 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.081680 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.081818 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.081926 4786 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.082038 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.082042 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.082275 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.082435 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.082902 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.082481 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.083107 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.082495 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.082688 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.083950 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.084001 
4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.084105 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.084267 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.084403 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.084431 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.084528 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.084579 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.084701 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.084916 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.084936 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.085075 4786 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.085338 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.083325 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.085631 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.085625 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-whlx4"] Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.086552 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-whlx4" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.086591 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.088426 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-wllp5"] Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.090549 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.091958 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vrj6k"] Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.092115 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wllp5" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.092481 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.092505 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.092800 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.093163 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.093716 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vrj6k" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.094045 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d8n2x"] Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.104660 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.104882 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d8n2x" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.112664 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2wgf7"] Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.116848 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.117332 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.117775 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-knblz"] Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.117921 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2wgf7" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.118420 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-6xjzj"] Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.118510 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-knblz" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.119048 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.119195 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lw2xw"] Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.119547 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lw2xw" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.119764 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6xjzj" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.119772 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2j2wh"] Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.120542 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.120659 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2j2wh" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.121235 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.122694 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.123017 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.123685 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lhhfq"] Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.123889 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.124371 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lhhfq" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.126000 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-wv4hn"] Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.128119 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-72vpf"] Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.129843 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.143374 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5bptf"] Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.144083 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5bptf" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.146671 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7l2ns"] Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.146704 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-j6ww5"] Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.146719 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-pgkx8"] Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.149984 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sllw5"] Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.150036 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-47c2z"] Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.152090 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8gmql"] Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.152188 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-47c2z" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.153926 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.154988 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-vwjp5"] Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.159264 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.159816 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ndd4n"] Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.160991 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ndd4n" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.161928 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-8vdpv"] Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.162724 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-8vdpv" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.164298 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-mtzzc"] Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.167134 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-qzpj9"] Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.167443 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mtzzc" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.170047 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-ddxlt"] Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.170078 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491980-6pf7c"] Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.177834 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491980-6pf7c" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.179162 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qrqjl"] Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.179316 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.180508 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vszvp"] Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.185371 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-8nm76"] Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.186868 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-qmkrf"] Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.189632 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-shs56"] Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.190167 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-shs56" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.192596 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-wllp5"] Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.194262 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d8n2x"] Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.197186 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-knblz"] Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.199468 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-6xjzj"] Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.199479 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.200115 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc82m"] Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.201546 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2wgf7"] Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.202834 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lhhfq"] Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.204452 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-lr6n6"] Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.206236 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lw2xw"]
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.207880 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qvpfk"]
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.209237 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-nsc4d"]
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.209335 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-qvpfk"
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.211351 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-mtzzc"]
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.215308 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2j2wh"]
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.216925 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491980-6pf7c"]
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.219100 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5bptf"]
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.219289 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.220832 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4c8h9"]
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.222376 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9l4wd"]
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.223532 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vrj6k"]
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.224615 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ndd4n"]
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.225951 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h2lh7"]
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.226695 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-47c2z"]
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.227796 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-8vdpv"]
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.229251 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qvpfk"]
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.230552 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-pq9ft"]
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.231270 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pq9ft"
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.231981 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-ck8s7"]
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.232874 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-ck8s7"
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.233200 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-pq9ft"]
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.234273 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-ck8s7"]
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.245811 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.279726 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.300033 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.319739 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.339586 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.360105 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.379446 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.399387 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.420389 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.439452 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.459919 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.479970 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.499993 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.520271 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.540484 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.559697 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.579687 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.619169 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.639161 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.659657 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.679422 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.699258 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.721207 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.739281 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.760803 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.780740 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.800253 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.820158 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.839693 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.860870 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.880892 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.900463 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.920304 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.940170 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.959563 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 27 13:09:16 crc kubenswrapper[4786]: I0127 13:09:16.980498 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.000269 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.021355 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.040200 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.060441 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.080448 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.100773 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.118715 4786 request.go:700] Waited for 1.00034998s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.120313 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.139901 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.159373 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.180114 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.199799 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.219863 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.239876 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.260788 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.279270 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.299467 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.320116 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.339717 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.359778 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.379796 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.399438 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.420818 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.439997 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.469314 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.480065 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.499694 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.520448 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.540282 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.560141 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.579687 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.599120 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.620984 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.640700 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.659421 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.681324 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.700154 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.720578 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.738975 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.760135 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.779836 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.799807 4786 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.820498 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.840400 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.859731 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.880970 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.900347 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.920251 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.940450 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 27 13:09:17 crc kubenswrapper[4786]: I0127 13:09:17.959353 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.020183 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.036997 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f1bfd5a9-8223-4fb1-b3e3-1acab59d886e-metrics-tls\") pod \"ingress-operator-5b745b69d9-qmkrf\" (UID: \"f1bfd5a9-8223-4fb1-b3e3-1acab59d886e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qmkrf"
Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.037040 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/6cba5a02-cf5f-4aaa-9c5c-c5979789f000-machine-approver-tls\") pod \"machine-approver-56656f9798-k47sw\" (UID: \"6cba5a02-cf5f-4aaa-9c5c-c5979789f000\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k47sw"
Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.037062 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4zkf\" (UniqueName: \"kubernetes.io/projected/b015794a-bfb0-4118-8dae-8861a7ff6a03-kube-api-access-v4zkf\") pod \"machine-api-operator-5694c8668f-pgkx8\" (UID: \"b015794a-bfb0-4118-8dae-8861a7ff6a03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pgkx8"
Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.037080 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6917b6e-005e-44cf-92c4-6fc271f5ce49-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9l4wd\" (UID: \"a6917b6e-005e-44cf-92c4-6fc271f5ce49\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9l4wd"
Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.037096 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54c0b843-e588-4296-a0ab-8272fa1b23e5-config\") pod \"authentication-operator-69f744f599-wv4hn\" (UID: \"54c0b843-e588-4296-a0ab-8272fa1b23e5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wv4hn"
Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.037113 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6adde762-4e97-44eb-a96c-14a79ec7998a-registry-tls\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl"
Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.037129 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6adde762-4e97-44eb-a96c-14a79ec7998a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl"
Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.037144 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-nsc4d\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d"
Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.037172 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5922ee53-d413-4676-ab1e-21f570893009-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-gc82m\" (UID: \"5922ee53-d413-4676-ab1e-21f570893009\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc82m"
Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.037216 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d3f971e8-68fd-40a7-902a-ba8c6110f14d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7l2ns\" (UID: \"d3f971e8-68fd-40a7-902a-ba8c6110f14d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7l2ns"
Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.037295 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6917b6e-005e-44cf-92c4-6fc271f5ce49-serving-cert\") pod \"apiserver-7bbb656c7d-9l4wd\" (UID: \"a6917b6e-005e-44cf-92c4-6fc271f5ce49\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9l4wd"
Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.037349 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p2p8\" (UniqueName: \"kubernetes.io/projected/54c0b843-e588-4296-a0ab-8272fa1b23e5-kube-api-access-4p2p8\") pod \"authentication-operator-69f744f599-wv4hn\" (UID: \"54c0b843-e588-4296-a0ab-8272fa1b23e5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wv4hn"
Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.037384 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvmwm\" (UniqueName: \"kubernetes.io/projected/8156e329-ca23-4079-8b23-ba0c32cc89a9-kube-api-access-hvmwm\") pod \"oauth-openshift-558db77b4-nsc4d\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d"
Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.037410 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7275decb-852d-401b-81b7-affb84126aad-trusted-ca\") pod \"console-operator-58897d9998-qzpj9\" (UID: \"7275decb-852d-401b-81b7-affb84126aad\") " pod="openshift-console-operator/console-operator-58897d9998-qzpj9"
Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.037433 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ef0407d-860c-4e18-8dea-9373887d7b88-trusted-ca-bundle\") pod \"apiserver-76f77b778f-j6ww5\" (UID: \"3ef0407d-860c-4e18-8dea-9373887d7b88\") " pod="openshift-apiserver/apiserver-76f77b778f-j6ww5"
Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.037454 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbcrj\" (UniqueName: \"kubernetes.io/projected/cbf5f627-0aa5-4a32-840c-f76373e2150e-kube-api-access-pbcrj\") pod \"downloads-7954f5f757-8nm76\" (UID: \"cbf5f627-0aa5-4a32-840c-f76373e2150e\") " pod="openshift-console/downloads-7954f5f757-8nm76"
Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.037544 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7275decb-852d-401b-81b7-affb84126aad-serving-cert\") pod \"console-operator-58897d9998-qzpj9\" (UID: \"7275decb-852d-401b-81b7-affb84126aad\") " pod="openshift-console-operator/console-operator-58897d9998-qzpj9"
Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.037580 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-nsc4d\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d"
Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.037623 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6adde762-4e97-44eb-a96c-14a79ec7998a-registry-certificates\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl"
Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.037691 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a6917b6e-005e-44cf-92c4-6fc271f5ce49-audit-dir\") pod \"apiserver-7bbb656c7d-9l4wd\" (UID: \"a6917b6e-005e-44cf-92c4-6fc271f5ce49\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9l4wd"
Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.037900 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-nsc4d\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d"
Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.037919 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjfrl\" (UniqueName: \"kubernetes.io/projected/47f5a0b2-7757-4795-901e-d175d64ebe67-kube-api-access-zjfrl\") pod \"console-f9d7485db-vwjp5\" (UID: \"47f5a0b2-7757-4795-901e-d175d64ebe67\") " pod="openshift-console/console-f9d7485db-vwjp5"
Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.037936 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3ef0407d-860c-4e18-8dea-9373887d7b88-etcd-serving-ca\") pod \"apiserver-76f77b778f-j6ww5\" (UID: \"3ef0407d-860c-4e18-8dea-9373887d7b88\") " pod="openshift-apiserver/apiserver-76f77b778f-j6ww5"
Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.037985 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5922ee53-d413-4676-ab1e-21f570893009-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-gc82m\" (UID: \"5922ee53-d413-4676-ab1e-21f570893009\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc82m"
Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.038009 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzjfm\" (UniqueName: \"kubernetes.io/projected/3ef0407d-860c-4e18-8dea-9373887d7b88-kube-api-access-bzjfm\") pod \"apiserver-76f77b778f-j6ww5\" (UID: \"3ef0407d-860c-4e18-8dea-9373887d7b88\") " pod="openshift-apiserver/apiserver-76f77b778f-j6ww5"
Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.038030 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54c0b843-e588-4296-a0ab-8272fa1b23e5-serving-cert\") pod \"authentication-operator-69f744f599-wv4hn\" (UID: \"54c0b843-e588-4296-a0ab-8272fa1b23e5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wv4hn"
Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.038047 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8156e329-ca23-4079-8b23-ba0c32cc89a9-audit-dir\") pod \"oauth-openshift-558db77b4-nsc4d\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d"
Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.038062 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9vft\" (UniqueName: \"kubernetes.io/projected/6adde762-4e97-44eb-a96c-14a79ec7998a-kube-api-access-w9vft\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl"
Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.038086 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d6bb6cb-129c-4f02-8afc-4c73ebaefdb2-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8gmql\" (UID: \"3d6bb6cb-129c-4f02-8afc-4c73ebaefdb2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8gmql"
Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.038101 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3ef0407d-860c-4e18-8dea-9373887d7b88-audit\") pod \"apiserver-76f77b778f-j6ww5\" (UID: \"3ef0407d-860c-4e18-8dea-9373887d7b88\") " pod="openshift-apiserver/apiserver-76f77b778f-j6ww5"
Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.038119 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znbz6\" (UniqueName: \"kubernetes.io/projected/bb619dcb-2b7a-413f-b136-48e4eec0eb9f-kube-api-access-znbz6\") pod \"cluster-samples-operator-665b6dd947-vszvp\" (UID: \"bb619dcb-2b7a-413f-b136-48e4eec0eb9f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vszvp"
Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.038138 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/47f5a0b2-7757-4795-901e-d175d64ebe67-console-serving-cert\") pod \"console-f9d7485db-vwjp5\" (UID: \"47f5a0b2-7757-4795-901e-d175d64ebe67\") " pod="openshift-console/console-f9d7485db-vwjp5"
Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.038161 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-nsc4d\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d"
Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.038181 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-nsc4d\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d"
Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.038197 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63594f44-fa91-43fe-b1da-d1df4f593e45-client-ca\") pod \"controller-manager-879f6c89f-wtdrs\" (UID: \"63594f44-fa91-43fe-b1da-d1df4f593e45\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wtdrs"
Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.038219 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-nsc4d\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d"
Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.038245 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ef0407d-860c-4e18-8dea-9373887d7b88-serving-cert\") pod \"apiserver-76f77b778f-j6ww5\" (UID: \"3ef0407d-860c-4e18-8dea-9373887d7b88\") " pod="openshift-apiserver/apiserver-76f77b778f-j6ww5"
Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.038293 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47f5a0b2-7757-4795-901e-d175d64ebe67-trusted-ca-bundle\") pod \"console-f9d7485db-vwjp5\" (UID: \"47f5a0b2-7757-4795-901e-d175d64ebe67\") " pod="openshift-console/console-f9d7485db-vwjp5"
Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.038340 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7275decb-852d-401b-81b7-affb84126aad-config\") pod \"console-operator-58897d9998-qzpj9\" (UID: \"7275decb-852d-401b-81b7-affb84126aad\") " pod="openshift-console-operator/console-operator-58897d9998-qzpj9"
Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.038370 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a6917b6e-005e-44cf-92c4-6fc271f5ce49-audit-policies\") pod \"apiserver-7bbb656c7d-9l4wd\" (UID: \"a6917b6e-005e-44cf-92c4-6fc271f5ce49\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9l4wd"
Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.038392 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb619dcb-2b7a-413f-b136-48e4eec0eb9f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-vszvp\" (UID: \"bb619dcb-2b7a-413f-b136-48e4eec0eb9f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vszvp"
Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.038437 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d6bb6cb-129c-4f02-8afc-4c73ebaefdb2-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8gmql\" (UID: \"3d6bb6cb-129c-4f02-8afc-4c73ebaefdb2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8gmql"
Jan 27 13:09:18 crc
kubenswrapper[4786]: I0127 13:09:18.038470 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ef0407d-860c-4e18-8dea-9373887d7b88-config\") pod \"apiserver-76f77b778f-j6ww5\" (UID: \"3ef0407d-860c-4e18-8dea-9373887d7b88\") " pod="openshift-apiserver/apiserver-76f77b778f-j6ww5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.038500 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6cba5a02-cf5f-4aaa-9c5c-c5979789f000-auth-proxy-config\") pod \"machine-approver-56656f9798-k47sw\" (UID: \"6cba5a02-cf5f-4aaa-9c5c-c5979789f000\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k47sw" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.038530 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6adde762-4e97-44eb-a96c-14a79ec7998a-trusted-ca\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.038547 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx5sw\" (UniqueName: \"kubernetes.io/projected/5922ee53-d413-4676-ab1e-21f570893009-kube-api-access-kx5sw\") pod \"openshift-controller-manager-operator-756b6f6bc6-gc82m\" (UID: \"5922ee53-d413-4676-ab1e-21f570893009\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc82m" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.038563 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/63594f44-fa91-43fe-b1da-d1df4f593e45-serving-cert\") pod \"controller-manager-879f6c89f-wtdrs\" (UID: \"63594f44-fa91-43fe-b1da-d1df4f593e45\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wtdrs" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.038576 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a6917b6e-005e-44cf-92c4-6fc271f5ce49-etcd-client\") pod \"apiserver-7bbb656c7d-9l4wd\" (UID: \"a6917b6e-005e-44cf-92c4-6fc271f5ce49\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9l4wd" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.038589 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-nsc4d\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.038640 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfh6f\" (UniqueName: \"kubernetes.io/projected/6cba5a02-cf5f-4aaa-9c5c-c5979789f000-kube-api-access-xfh6f\") pod \"machine-approver-56656f9798-k47sw\" (UID: \"6cba5a02-cf5f-4aaa-9c5c-c5979789f000\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k47sw" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.038700 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfjfz\" (UniqueName: \"kubernetes.io/projected/3d6bb6cb-129c-4f02-8afc-4c73ebaefdb2-kube-api-access-sfjfz\") pod \"openshift-apiserver-operator-796bbdcf4f-8gmql\" (UID: \"3d6bb6cb-129c-4f02-8afc-4c73ebaefdb2\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8gmql" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.038719 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1bfd5a9-8223-4fb1-b3e3-1acab59d886e-trusted-ca\") pod \"ingress-operator-5b745b69d9-qmkrf\" (UID: \"f1bfd5a9-8223-4fb1-b3e3-1acab59d886e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qmkrf" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.038734 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/63594f44-fa91-43fe-b1da-d1df4f593e45-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-wtdrs\" (UID: \"63594f44-fa91-43fe-b1da-d1df4f593e45\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wtdrs" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.038750 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a6917b6e-005e-44cf-92c4-6fc271f5ce49-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9l4wd\" (UID: \"a6917b6e-005e-44cf-92c4-6fc271f5ce49\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9l4wd" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.038768 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/975cb02b-51f0-4d7b-a59c-b25126c0c1c2-serving-cert\") pod \"route-controller-manager-6576b87f9c-72vpf\" (UID: \"975cb02b-51f0-4d7b-a59c-b25126c0c1c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-72vpf" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.038784 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3ef0407d-860c-4e18-8dea-9373887d7b88-image-import-ca\") pod \"apiserver-76f77b778f-j6ww5\" (UID: \"3ef0407d-860c-4e18-8dea-9373887d7b88\") " pod="openshift-apiserver/apiserver-76f77b778f-j6ww5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.038800 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3ef0407d-860c-4e18-8dea-9373887d7b88-audit-dir\") pod \"apiserver-76f77b778f-j6ww5\" (UID: \"3ef0407d-860c-4e18-8dea-9373887d7b88\") " pod="openshift-apiserver/apiserver-76f77b778f-j6ww5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.038821 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfgsl\" (UniqueName: \"kubernetes.io/projected/7275decb-852d-401b-81b7-affb84126aad-kube-api-access-zfgsl\") pod \"console-operator-58897d9998-qzpj9\" (UID: \"7275decb-852d-401b-81b7-affb84126aad\") " pod="openshift-console-operator/console-operator-58897d9998-qzpj9" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.038844 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/47f5a0b2-7757-4795-901e-d175d64ebe67-console-oauth-config\") pod \"console-f9d7485db-vwjp5\" (UID: \"47f5a0b2-7757-4795-901e-d175d64ebe67\") " pod="openshift-console/console-f9d7485db-vwjp5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.038868 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6adde762-4e97-44eb-a96c-14a79ec7998a-bound-sa-token\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 
13:09:18.038882 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8156e329-ca23-4079-8b23-ba0c32cc89a9-audit-policies\") pod \"oauth-openshift-558db77b4-nsc4d\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.038903 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trlrz\" (UniqueName: \"kubernetes.io/projected/caeaacdd-b085-4196-8fcc-f10ba2b593f7-kube-api-access-trlrz\") pod \"dns-operator-744455d44c-ddxlt\" (UID: \"caeaacdd-b085-4196-8fcc-f10ba2b593f7\") " pod="openshift-dns-operator/dns-operator-744455d44c-ddxlt" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.038923 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/47f5a0b2-7757-4795-901e-d175d64ebe67-console-config\") pod \"console-f9d7485db-vwjp5\" (UID: \"47f5a0b2-7757-4795-901e-d175d64ebe67\") " pod="openshift-console/console-f9d7485db-vwjp5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.038993 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6adde762-4e97-44eb-a96c-14a79ec7998a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.039025 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/975cb02b-51f0-4d7b-a59c-b25126c0c1c2-config\") pod \"route-controller-manager-6576b87f9c-72vpf\" (UID: 
\"975cb02b-51f0-4d7b-a59c-b25126c0c1c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-72vpf" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.039063 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/caeaacdd-b085-4196-8fcc-f10ba2b593f7-metrics-tls\") pod \"dns-operator-744455d44c-ddxlt\" (UID: \"caeaacdd-b085-4196-8fcc-f10ba2b593f7\") " pod="openshift-dns-operator/dns-operator-744455d44c-ddxlt" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.039086 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3ef0407d-860c-4e18-8dea-9373887d7b88-etcd-client\") pod \"apiserver-76f77b778f-j6ww5\" (UID: \"3ef0407d-860c-4e18-8dea-9373887d7b88\") " pod="openshift-apiserver/apiserver-76f77b778f-j6ww5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.039110 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63594f44-fa91-43fe-b1da-d1df4f593e45-config\") pod \"controller-manager-879f6c89f-wtdrs\" (UID: \"63594f44-fa91-43fe-b1da-d1df4f593e45\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wtdrs" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.039160 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3f971e8-68fd-40a7-902a-ba8c6110f14d-serving-cert\") pod \"openshift-config-operator-7777fb866f-7l2ns\" (UID: \"d3f971e8-68fd-40a7-902a-ba8c6110f14d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7l2ns" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.039188 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" 
(UniqueName: \"kubernetes.io/secret/a6917b6e-005e-44cf-92c4-6fc271f5ce49-encryption-config\") pod \"apiserver-7bbb656c7d-9l4wd\" (UID: \"a6917b6e-005e-44cf-92c4-6fc271f5ce49\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9l4wd" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.039223 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-nsc4d\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.039261 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-nsc4d\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.039296 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/47f5a0b2-7757-4795-901e-d175d64ebe67-service-ca\") pod \"console-f9d7485db-vwjp5\" (UID: \"47f5a0b2-7757-4795-901e-d175d64ebe67\") " pod="openshift-console/console-f9d7485db-vwjp5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.039352 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/975cb02b-51f0-4d7b-a59c-b25126c0c1c2-client-ca\") pod \"route-controller-manager-6576b87f9c-72vpf\" (UID: \"975cb02b-51f0-4d7b-a59c-b25126c0c1c2\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-72vpf" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.039383 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd9c7\" (UniqueName: \"kubernetes.io/projected/f1bfd5a9-8223-4fb1-b3e3-1acab59d886e-kube-api-access-nd9c7\") pod \"ingress-operator-5b745b69d9-qmkrf\" (UID: \"f1bfd5a9-8223-4fb1-b3e3-1acab59d886e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qmkrf" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.039402 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt2km\" (UniqueName: \"kubernetes.io/projected/d3f971e8-68fd-40a7-902a-ba8c6110f14d-kube-api-access-nt2km\") pod \"openshift-config-operator-7777fb866f-7l2ns\" (UID: \"d3f971e8-68fd-40a7-902a-ba8c6110f14d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7l2ns" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.039445 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm24p\" (UniqueName: \"kubernetes.io/projected/a6917b6e-005e-44cf-92c4-6fc271f5ce49-kube-api-access-gm24p\") pod \"apiserver-7bbb656c7d-9l4wd\" (UID: \"a6917b6e-005e-44cf-92c4-6fc271f5ce49\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9l4wd" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.039477 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54c0b843-e588-4296-a0ab-8272fa1b23e5-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-wv4hn\" (UID: \"54c0b843-e588-4296-a0ab-8272fa1b23e5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wv4hn" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.039488 4786 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.039515 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-nsc4d\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.039557 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f1bfd5a9-8223-4fb1-b3e3-1acab59d886e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-qmkrf\" (UID: \"f1bfd5a9-8223-4fb1-b3e3-1acab59d886e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qmkrf" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.039595 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b015794a-bfb0-4118-8dae-8861a7ff6a03-config\") pod \"machine-api-operator-5694c8668f-pgkx8\" (UID: \"b015794a-bfb0-4118-8dae-8861a7ff6a03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pgkx8" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.039687 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54c0b843-e588-4296-a0ab-8272fa1b23e5-service-ca-bundle\") pod \"authentication-operator-69f744f599-wv4hn\" (UID: \"54c0b843-e588-4296-a0ab-8272fa1b23e5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wv4hn" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.039756 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-nsc4d\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.039792 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b015794a-bfb0-4118-8dae-8861a7ff6a03-images\") pod \"machine-api-operator-5694c8668f-pgkx8\" (UID: \"b015794a-bfb0-4118-8dae-8861a7ff6a03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pgkx8" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.039843 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b015794a-bfb0-4118-8dae-8861a7ff6a03-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-pgkx8\" (UID: \"b015794a-bfb0-4118-8dae-8861a7ff6a03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pgkx8" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.039863 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3ef0407d-860c-4e18-8dea-9373887d7b88-encryption-config\") pod \"apiserver-76f77b778f-j6ww5\" (UID: \"3ef0407d-860c-4e18-8dea-9373887d7b88\") " pod="openshift-apiserver/apiserver-76f77b778f-j6ww5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.039882 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65s5z\" (UniqueName: \"kubernetes.io/projected/63594f44-fa91-43fe-b1da-d1df4f593e45-kube-api-access-65s5z\") pod \"controller-manager-879f6c89f-wtdrs\" (UID: 
\"63594f44-fa91-43fe-b1da-d1df4f593e45\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wtdrs" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.039947 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5jpn\" (UniqueName: \"kubernetes.io/projected/975cb02b-51f0-4d7b-a59c-b25126c0c1c2-kube-api-access-l5jpn\") pod \"route-controller-manager-6576b87f9c-72vpf\" (UID: \"975cb02b-51f0-4d7b-a59c-b25126c0c1c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-72vpf" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.040005 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cba5a02-cf5f-4aaa-9c5c-c5979789f000-config\") pod \"machine-approver-56656f9798-k47sw\" (UID: \"6cba5a02-cf5f-4aaa-9c5c-c5979789f000\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k47sw" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.040026 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/47f5a0b2-7757-4795-901e-d175d64ebe67-oauth-serving-cert\") pod \"console-f9d7485db-vwjp5\" (UID: \"47f5a0b2-7757-4795-901e-d175d64ebe67\") " pod="openshift-console/console-f9d7485db-vwjp5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.040057 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.040076 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3ef0407d-860c-4e18-8dea-9373887d7b88-node-pullsecrets\") pod \"apiserver-76f77b778f-j6ww5\" (UID: \"3ef0407d-860c-4e18-8dea-9373887d7b88\") " pod="openshift-apiserver/apiserver-76f77b778f-j6ww5" Jan 27 13:09:18 crc kubenswrapper[4786]: E0127 13:09:18.040463 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:18.540443031 +0000 UTC m=+141.751057230 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.059358 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.079864 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.099959 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.120645 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.140423 4786 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.140708 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:18 crc kubenswrapper[4786]: E0127 13:09:18.140856 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:18.640834833 +0000 UTC m=+141.851448962 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.140905 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3ad64404-53a2-45aa-8eba-292f14135379-images\") pod \"machine-config-operator-74547568cd-6xjzj\" (UID: \"3ad64404-53a2-45aa-8eba-292f14135379\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6xjzj" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.140937 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgkj8\" (UniqueName: 
\"kubernetes.io/projected/9b6d90be-b987-4cac-8e4c-7c1afa1abffe-kube-api-access-sgkj8\") pod \"service-ca-9c57cc56f-8vdpv\" (UID: \"9b6d90be-b987-4cac-8e4c-7c1afa1abffe\") " pod="openshift-service-ca/service-ca-9c57cc56f-8vdpv" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.140959 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b44d5db3-8518-4039-9d50-6d9d991e78bc-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-h2lh7\" (UID: \"b44d5db3-8518-4039-9d50-6d9d991e78bc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h2lh7" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.140991 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6adde762-4e97-44eb-a96c-14a79ec7998a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.141027 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/caeaacdd-b085-4196-8fcc-f10ba2b593f7-metrics-tls\") pod \"dns-operator-744455d44c-ddxlt\" (UID: \"caeaacdd-b085-4196-8fcc-f10ba2b593f7\") " pod="openshift-dns-operator/dns-operator-744455d44c-ddxlt" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.141070 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63594f44-fa91-43fe-b1da-d1df4f593e45-config\") pod \"controller-manager-879f6c89f-wtdrs\" (UID: \"63594f44-fa91-43fe-b1da-d1df4f593e45\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wtdrs" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.141096 4786 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk4bz\" (UniqueName: \"kubernetes.io/projected/294ae02f-293d-4db8-9a9f-6d3878c8ccf9-kube-api-access-rk4bz\") pod \"marketplace-operator-79b997595-ndd4n\" (UID: \"294ae02f-293d-4db8-9a9f-6d3878c8ccf9\") " pod="openshift-marketplace/marketplace-operator-79b997595-ndd4n" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.141128 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtgdd\" (UniqueName: \"kubernetes.io/projected/3cf531c6-d1a1-4f65-af72-093ffdb034c1-kube-api-access-qtgdd\") pod \"collect-profiles-29491980-6pf7c\" (UID: \"3cf531c6-d1a1-4f65-af72-093ffdb034c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491980-6pf7c" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.141158 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3f971e8-68fd-40a7-902a-ba8c6110f14d-serving-cert\") pod \"openshift-config-operator-7777fb866f-7l2ns\" (UID: \"d3f971e8-68fd-40a7-902a-ba8c6110f14d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7l2ns" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.141189 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-nsc4d\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.141221 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a0cb545f-a56f-4948-b56d-956edd2501db-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-2j2wh\" (UID: \"a0cb545f-a56f-4948-b56d-956edd2501db\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2j2wh" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.141254 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsqvc\" (UniqueName: \"kubernetes.io/projected/c35b6e44-42c4-4322-a9cd-14a087926529-kube-api-access-tsqvc\") pod \"service-ca-operator-777779d784-mtzzc\" (UID: \"c35b6e44-42c4-4322-a9cd-14a087926529\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mtzzc" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.141284 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl77z\" (UniqueName: \"kubernetes.io/projected/486083ab-c15d-43da-9eb1-c9736ed02e4e-kube-api-access-hl77z\") pod \"ingress-canary-pq9ft\" (UID: \"486083ab-c15d-43da-9eb1-c9736ed02e4e\") " pod="openshift-ingress-canary/ingress-canary-pq9ft" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.141312 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9b6d90be-b987-4cac-8e4c-7c1afa1abffe-signing-cabundle\") pod \"service-ca-9c57cc56f-8vdpv\" (UID: \"9b6d90be-b987-4cac-8e4c-7c1afa1abffe\") " pod="openshift-service-ca/service-ca-9c57cc56f-8vdpv" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.141343 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-nsc4d\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.141374 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pclzk\" (UniqueName: \"kubernetes.io/projected/e183bd5d-f0d0-4254-82d5-240578ae6d1a-kube-api-access-pclzk\") pod \"csi-hostpathplugin-qvpfk\" (UID: \"e183bd5d-f0d0-4254-82d5-240578ae6d1a\") " pod="hostpath-provisioner/csi-hostpathplugin-qvpfk" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.141403 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fa7fb69c-753a-481b-891d-661a2b6b37fd-etcd-ca\") pod \"etcd-operator-b45778765-lr6n6\" (UID: \"fa7fb69c-753a-481b-891d-661a2b6b37fd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lr6n6" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.141428 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/fafe1848-6215-4056-92a7-d8032eca6e26-certs\") pod \"machine-config-server-shs56\" (UID: \"fafe1848-6215-4056-92a7-d8032eca6e26\") " pod="openshift-machine-config-operator/machine-config-server-shs56" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.141455 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbb6v\" (UniqueName: \"kubernetes.io/projected/4ac1751a-b5c6-46c7-a771-200f41805eea-kube-api-access-zbb6v\") pod \"control-plane-machine-set-operator-78cbb6b69f-47c2z\" (UID: \"4ac1751a-b5c6-46c7-a771-200f41805eea\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-47c2z" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.141491 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm24p\" (UniqueName: 
\"kubernetes.io/projected/a6917b6e-005e-44cf-92c4-6fc271f5ce49-kube-api-access-gm24p\") pod \"apiserver-7bbb656c7d-9l4wd\" (UID: \"a6917b6e-005e-44cf-92c4-6fc271f5ce49\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9l4wd" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.141523 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/33df9034-8bca-4bf5-ac81-73b8b14ca319-metrics-tls\") pod \"dns-default-ck8s7\" (UID: \"33df9034-8bca-4bf5-ac81-73b8b14ca319\") " pod="openshift-dns/dns-default-ck8s7" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.141555 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b015794a-bfb0-4118-8dae-8861a7ff6a03-config\") pod \"machine-api-operator-5694c8668f-pgkx8\" (UID: \"b015794a-bfb0-4118-8dae-8861a7ff6a03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pgkx8" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.141587 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e183bd5d-f0d0-4254-82d5-240578ae6d1a-plugins-dir\") pod \"csi-hostpathplugin-qvpfk\" (UID: \"e183bd5d-f0d0-4254-82d5-240578ae6d1a\") " pod="hostpath-provisioner/csi-hostpathplugin-qvpfk" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.141648 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b015794a-bfb0-4118-8dae-8861a7ff6a03-images\") pod \"machine-api-operator-5694c8668f-pgkx8\" (UID: \"b015794a-bfb0-4118-8dae-8861a7ff6a03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pgkx8" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.141680 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/b015794a-bfb0-4118-8dae-8861a7ff6a03-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-pgkx8\" (UID: \"b015794a-bfb0-4118-8dae-8861a7ff6a03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pgkx8" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.141744 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3ef0407d-860c-4e18-8dea-9373887d7b88-encryption-config\") pod \"apiserver-76f77b778f-j6ww5\" (UID: \"3ef0407d-860c-4e18-8dea-9373887d7b88\") " pod="openshift-apiserver/apiserver-76f77b778f-j6ww5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.141779 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65s5z\" (UniqueName: \"kubernetes.io/projected/63594f44-fa91-43fe-b1da-d1df4f593e45-kube-api-access-65s5z\") pod \"controller-manager-879f6c89f-wtdrs\" (UID: \"63594f44-fa91-43fe-b1da-d1df4f593e45\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wtdrs" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.141819 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp65l\" (UniqueName: \"kubernetes.io/projected/021c92f0-97e6-41aa-9ff3-b14e81d3431a-kube-api-access-sp65l\") pod \"machine-config-controller-84d6567774-wllp5\" (UID: \"021c92f0-97e6-41aa-9ff3-b14e81d3431a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wllp5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.141852 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cba5a02-cf5f-4aaa-9c5c-c5979789f000-config\") pod \"machine-approver-56656f9798-k47sw\" (UID: \"6cba5a02-cf5f-4aaa-9c5c-c5979789f000\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k47sw" Jan 27 13:09:18 crc 
kubenswrapper[4786]: I0127 13:09:18.141884 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3ef0407d-860c-4e18-8dea-9373887d7b88-node-pullsecrets\") pod \"apiserver-76f77b778f-j6ww5\" (UID: \"3ef0407d-860c-4e18-8dea-9373887d7b88\") " pod="openshift-apiserver/apiserver-76f77b778f-j6ww5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.141911 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f90e2034-b734-41d4-b7e2-856605c95aff-srv-cert\") pod \"catalog-operator-68c6474976-lw2xw\" (UID: \"f90e2034-b734-41d4-b7e2-856605c95aff\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lw2xw" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.141938 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e183bd5d-f0d0-4254-82d5-240578ae6d1a-csi-data-dir\") pod \"csi-hostpathplugin-qvpfk\" (UID: \"e183bd5d-f0d0-4254-82d5-240578ae6d1a\") " pod="hostpath-provisioner/csi-hostpathplugin-qvpfk" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.141977 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54c0b843-e588-4296-a0ab-8272fa1b23e5-config\") pod \"authentication-operator-69f744f599-wv4hn\" (UID: \"54c0b843-e588-4296-a0ab-8272fa1b23e5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wv4hn" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.142007 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/952ee042-cdc6-44b7-8aa0-b24f7e1e1027-config\") pod \"kube-apiserver-operator-766d6c64bb-4c8h9\" (UID: \"952ee042-cdc6-44b7-8aa0-b24f7e1e1027\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4c8h9" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.142037 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa7fb69c-753a-481b-891d-661a2b6b37fd-serving-cert\") pod \"etcd-operator-b45778765-lr6n6\" (UID: \"fa7fb69c-753a-481b-891d-661a2b6b37fd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lr6n6" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.142069 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6adde762-4e97-44eb-a96c-14a79ec7998a-registry-tls\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.142098 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4zkf\" (UniqueName: \"kubernetes.io/projected/b015794a-bfb0-4118-8dae-8861a7ff6a03-kube-api-access-v4zkf\") pod \"machine-api-operator-5694c8668f-pgkx8\" (UID: \"b015794a-bfb0-4118-8dae-8861a7ff6a03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pgkx8" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.142126 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6adde762-4e97-44eb-a96c-14a79ec7998a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.142153 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-nsc4d\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.142186 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/021c92f0-97e6-41aa-9ff3-b14e81d3431a-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-wllp5\" (UID: \"021c92f0-97e6-41aa-9ff3-b14e81d3431a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wllp5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.142216 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c35b6e44-42c4-4322-a9cd-14a087926529-config\") pod \"service-ca-operator-777779d784-mtzzc\" (UID: \"c35b6e44-42c4-4322-a9cd-14a087926529\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mtzzc" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.142246 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5922ee53-d413-4676-ab1e-21f570893009-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-gc82m\" (UID: \"5922ee53-d413-4676-ab1e-21f570893009\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc82m" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.142275 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6917b6e-005e-44cf-92c4-6fc271f5ce49-serving-cert\") pod \"apiserver-7bbb656c7d-9l4wd\" (UID: \"a6917b6e-005e-44cf-92c4-6fc271f5ce49\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9l4wd" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.142309 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tdr7\" (UniqueName: \"kubernetes.io/projected/fa7fb69c-753a-481b-891d-661a2b6b37fd-kube-api-access-9tdr7\") pod \"etcd-operator-b45778765-lr6n6\" (UID: \"fa7fb69c-753a-481b-891d-661a2b6b37fd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lr6n6" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.142341 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbcrj\" (UniqueName: \"kubernetes.io/projected/cbf5f627-0aa5-4a32-840c-f76373e2150e-kube-api-access-pbcrj\") pod \"downloads-7954f5f757-8nm76\" (UID: \"cbf5f627-0aa5-4a32-840c-f76373e2150e\") " pod="openshift-console/downloads-7954f5f757-8nm76" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.142384 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9472f740-2de2-4b00-b109-0b17604c4c9e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-2wgf7\" (UID: \"9472f740-2de2-4b00-b109-0b17604c4c9e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2wgf7" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.142413 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkcwv\" (UniqueName: \"kubernetes.io/projected/fafe1848-6215-4056-92a7-d8032eca6e26-kube-api-access-zkcwv\") pod \"machine-config-server-shs56\" (UID: \"fafe1848-6215-4056-92a7-d8032eca6e26\") " pod="openshift-machine-config-operator/machine-config-server-shs56" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.142449 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/6adde762-4e97-44eb-a96c-14a79ec7998a-registry-certificates\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.142641 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b015794a-bfb0-4118-8dae-8861a7ff6a03-config\") pod \"machine-api-operator-5694c8668f-pgkx8\" (UID: \"b015794a-bfb0-4118-8dae-8861a7ff6a03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pgkx8" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.142718 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a6917b6e-005e-44cf-92c4-6fc271f5ce49-audit-dir\") pod \"apiserver-7bbb656c7d-9l4wd\" (UID: \"a6917b6e-005e-44cf-92c4-6fc271f5ce49\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9l4wd" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.142752 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-nsc4d\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.142783 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjfrl\" (UniqueName: \"kubernetes.io/projected/47f5a0b2-7757-4795-901e-d175d64ebe67-kube-api-access-zjfrl\") pod \"console-f9d7485db-vwjp5\" (UID: \"47f5a0b2-7757-4795-901e-d175d64ebe67\") " pod="openshift-console/console-f9d7485db-vwjp5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.142815 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fa7fb69c-753a-481b-891d-661a2b6b37fd-etcd-client\") pod \"etcd-operator-b45778765-lr6n6\" (UID: \"fa7fb69c-753a-481b-891d-661a2b6b37fd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lr6n6" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.142844 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5922ee53-d413-4676-ab1e-21f570893009-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-gc82m\" (UID: \"5922ee53-d413-4676-ab1e-21f570893009\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc82m" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.142868 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ddd8d0e9-9f75-4f78-9d96-9373fdaa9c08-service-ca-bundle\") pod \"router-default-5444994796-whlx4\" (UID: \"ddd8d0e9-9f75-4f78-9d96-9373fdaa9c08\") " pod="openshift-ingress/router-default-5444994796-whlx4" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.142897 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bdde3a3a-6d8a-45b8-b718-378bca43559e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-lhhfq\" (UID: \"bdde3a3a-6d8a-45b8-b718-378bca43559e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lhhfq" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.142925 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54c0b843-e588-4296-a0ab-8272fa1b23e5-serving-cert\") pod \"authentication-operator-69f744f599-wv4hn\" (UID: 
\"54c0b843-e588-4296-a0ab-8272fa1b23e5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wv4hn" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.142948 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8156e329-ca23-4079-8b23-ba0c32cc89a9-audit-dir\") pod \"oauth-openshift-558db77b4-nsc4d\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.142972 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d6bb6cb-129c-4f02-8afc-4c73ebaefdb2-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8gmql\" (UID: \"3d6bb6cb-129c-4f02-8afc-4c73ebaefdb2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8gmql" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.142995 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3ef0407d-860c-4e18-8dea-9373887d7b88-audit\") pod \"apiserver-76f77b778f-j6ww5\" (UID: \"3ef0407d-860c-4e18-8dea-9373887d7b88\") " pod="openshift-apiserver/apiserver-76f77b778f-j6ww5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.143020 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-nsc4d\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.143044 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/021c92f0-97e6-41aa-9ff3-b14e81d3431a-proxy-tls\") pod \"machine-config-controller-84d6567774-wllp5\" (UID: \"021c92f0-97e6-41aa-9ff3-b14e81d3431a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wllp5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.143066 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ddd8d0e9-9f75-4f78-9d96-9373fdaa9c08-metrics-certs\") pod \"router-default-5444994796-whlx4\" (UID: \"ddd8d0e9-9f75-4f78-9d96-9373fdaa9c08\") " pod="openshift-ingress/router-default-5444994796-whlx4" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.143110 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3ef0407d-860c-4e18-8dea-9373887d7b88-node-pullsecrets\") pod \"apiserver-76f77b778f-j6ww5\" (UID: \"3ef0407d-860c-4e18-8dea-9373887d7b88\") " pod="openshift-apiserver/apiserver-76f77b778f-j6ww5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.143161 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63594f44-fa91-43fe-b1da-d1df4f593e45-config\") pod \"controller-manager-879f6c89f-wtdrs\" (UID: \"63594f44-fa91-43fe-b1da-d1df4f593e45\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wtdrs" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.143309 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b015794a-bfb0-4118-8dae-8861a7ff6a03-images\") pod \"machine-api-operator-5694c8668f-pgkx8\" (UID: \"b015794a-bfb0-4118-8dae-8861a7ff6a03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pgkx8" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.143460 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-nsc4d\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.143667 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cba5a02-cf5f-4aaa-9c5c-c5979789f000-config\") pod \"machine-approver-56656f9798-k47sw\" (UID: \"6cba5a02-cf5f-4aaa-9c5c-c5979789f000\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k47sw" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.143714 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a6917b6e-005e-44cf-92c4-6fc271f5ce49-audit-dir\") pod \"apiserver-7bbb656c7d-9l4wd\" (UID: \"a6917b6e-005e-44cf-92c4-6fc271f5ce49\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9l4wd" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.144010 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54c0b843-e588-4296-a0ab-8272fa1b23e5-config\") pod \"authentication-operator-69f744f599-wv4hn\" (UID: \"54c0b843-e588-4296-a0ab-8272fa1b23e5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wv4hn" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.144309 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-nsc4d\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.144325 
4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8156e329-ca23-4079-8b23-ba0c32cc89a9-audit-dir\") pod \"oauth-openshift-558db77b4-nsc4d\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.144382 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/294ae02f-293d-4db8-9a9f-6d3878c8ccf9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ndd4n\" (UID: \"294ae02f-293d-4db8-9a9f-6d3878c8ccf9\") " pod="openshift-marketplace/marketplace-operator-79b997595-ndd4n" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.144527 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6adde762-4e97-44eb-a96c-14a79ec7998a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.145042 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5922ee53-d413-4676-ab1e-21f570893009-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-gc82m\" (UID: \"5922ee53-d413-4676-ab1e-21f570893009\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc82m" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.145174 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3ef0407d-860c-4e18-8dea-9373887d7b88-audit\") pod \"apiserver-76f77b778f-j6ww5\" (UID: \"3ef0407d-860c-4e18-8dea-9373887d7b88\") " 
pod="openshift-apiserver/apiserver-76f77b778f-j6ww5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.145179 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrvb4\" (UniqueName: \"kubernetes.io/projected/2ec75c03-cdcf-4d6e-97cd-453ea4d288f3-kube-api-access-lrvb4\") pod \"packageserver-d55dfcdfc-5bptf\" (UID: \"2ec75c03-cdcf-4d6e-97cd-453ea4d288f3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5bptf" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.145229 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3ad64404-53a2-45aa-8eba-292f14135379-proxy-tls\") pod \"machine-config-operator-74547568cd-6xjzj\" (UID: \"3ad64404-53a2-45aa-8eba-292f14135379\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6xjzj" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.145346 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ef0407d-860c-4e18-8dea-9373887d7b88-serving-cert\") pod \"apiserver-76f77b778f-j6ww5\" (UID: \"3ef0407d-860c-4e18-8dea-9373887d7b88\") " pod="openshift-apiserver/apiserver-76f77b778f-j6ww5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.145405 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47f5a0b2-7757-4795-901e-d175d64ebe67-trusted-ca-bundle\") pod \"console-f9d7485db-vwjp5\" (UID: \"47f5a0b2-7757-4795-901e-d175d64ebe67\") " pod="openshift-console/console-f9d7485db-vwjp5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.145552 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/e183bd5d-f0d0-4254-82d5-240578ae6d1a-registration-dir\") pod \"csi-hostpathplugin-qvpfk\" (UID: \"e183bd5d-f0d0-4254-82d5-240578ae6d1a\") " pod="hostpath-provisioner/csi-hostpathplugin-qvpfk" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.145643 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/fafe1848-6215-4056-92a7-d8032eca6e26-node-bootstrap-token\") pod \"machine-config-server-shs56\" (UID: \"fafe1848-6215-4056-92a7-d8032eca6e26\") " pod="openshift-machine-config-operator/machine-config-server-shs56" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.145706 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/294ae02f-293d-4db8-9a9f-6d3878c8ccf9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ndd4n\" (UID: \"294ae02f-293d-4db8-9a9f-6d3878c8ccf9\") " pod="openshift-marketplace/marketplace-operator-79b997595-ndd4n" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.145793 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33df9034-8bca-4bf5-ac81-73b8b14ca319-config-volume\") pod \"dns-default-ck8s7\" (UID: \"33df9034-8bca-4bf5-ac81-73b8b14ca319\") " pod="openshift-dns/dns-default-ck8s7" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.145896 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a6917b6e-005e-44cf-92c4-6fc271f5ce49-audit-policies\") pod \"apiserver-7bbb656c7d-9l4wd\" (UID: \"a6917b6e-005e-44cf-92c4-6fc271f5ce49\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9l4wd" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.145960 4786 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7275decb-852d-401b-81b7-affb84126aad-config\") pod \"console-operator-58897d9998-qzpj9\" (UID: \"7275decb-852d-401b-81b7-affb84126aad\") " pod="openshift-console-operator/console-operator-58897d9998-qzpj9" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.146013 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ef0407d-860c-4e18-8dea-9373887d7b88-config\") pod \"apiserver-76f77b778f-j6ww5\" (UID: \"3ef0407d-860c-4e18-8dea-9373887d7b88\") " pod="openshift-apiserver/apiserver-76f77b778f-j6ww5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.146047 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d6bb6cb-129c-4f02-8afc-4c73ebaefdb2-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8gmql\" (UID: \"3d6bb6cb-129c-4f02-8afc-4c73ebaefdb2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8gmql" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.146499 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47f5a0b2-7757-4795-901e-d175d64ebe67-trusted-ca-bundle\") pod \"console-f9d7485db-vwjp5\" (UID: \"47f5a0b2-7757-4795-901e-d175d64ebe67\") " pod="openshift-console/console-f9d7485db-vwjp5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.146537 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-nsc4d\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" Jan 27 13:09:18 crc 
kubenswrapper[4786]: I0127 13:09:18.147077 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a6917b6e-005e-44cf-92c4-6fc271f5ce49-audit-policies\") pod \"apiserver-7bbb656c7d-9l4wd\" (UID: \"a6917b6e-005e-44cf-92c4-6fc271f5ce49\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9l4wd" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.147692 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7275decb-852d-401b-81b7-affb84126aad-config\") pod \"console-operator-58897d9998-qzpj9\" (UID: \"7275decb-852d-401b-81b7-affb84126aad\") " pod="openshift-console-operator/console-operator-58897d9998-qzpj9" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.147745 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l58p4\" (UniqueName: \"kubernetes.io/projected/4872c606-0857-4266-8cd8-a78c8070b5ca-kube-api-access-l58p4\") pod \"multus-admission-controller-857f4d67dd-knblz\" (UID: \"4872c606-0857-4266-8cd8-a78c8070b5ca\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-knblz" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.147777 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/486083ab-c15d-43da-9eb1-c9736ed02e4e-cert\") pod \"ingress-canary-pq9ft\" (UID: \"486083ab-c15d-43da-9eb1-c9736ed02e4e\") " pod="openshift-ingress-canary/ingress-canary-pq9ft" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.147795 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69af92e-0d80-428e-930c-fbd86b17643c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-d8n2x\" (UID: \"e69af92e-0d80-428e-930c-fbd86b17643c\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d8n2x" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.147814 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1bfd5a9-8223-4fb1-b3e3-1acab59d886e-trusted-ca\") pod \"ingress-operator-5b745b69d9-qmkrf\" (UID: \"f1bfd5a9-8223-4fb1-b3e3-1acab59d886e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qmkrf" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.147871 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/63594f44-fa91-43fe-b1da-d1df4f593e45-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-wtdrs\" (UID: \"63594f44-fa91-43fe-b1da-d1df4f593e45\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wtdrs" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.147889 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b44d5db3-8518-4039-9d50-6d9d991e78bc-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-h2lh7\" (UID: \"b44d5db3-8518-4039-9d50-6d9d991e78bc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h2lh7" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.147907 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3ef0407d-860c-4e18-8dea-9373887d7b88-audit-dir\") pod \"apiserver-76f77b778f-j6ww5\" (UID: \"3ef0407d-860c-4e18-8dea-9373887d7b88\") " pod="openshift-apiserver/apiserver-76f77b778f-j6ww5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.147951 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6xvr\" 
(UniqueName: \"kubernetes.io/projected/a0cb545f-a56f-4948-b56d-956edd2501db-kube-api-access-q6xvr\") pod \"package-server-manager-789f6589d5-2j2wh\" (UID: \"a0cb545f-a56f-4948-b56d-956edd2501db\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2j2wh" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.147968 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b44d5db3-8518-4039-9d50-6d9d991e78bc-config\") pod \"kube-controller-manager-operator-78b949d7b-h2lh7\" (UID: \"b44d5db3-8518-4039-9d50-6d9d991e78bc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h2lh7" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.147986 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3cf531c6-d1a1-4f65-af72-093ffdb034c1-config-volume\") pod \"collect-profiles-29491980-6pf7c\" (UID: \"3cf531c6-d1a1-4f65-af72-093ffdb034c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491980-6pf7c" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.148007 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8156e329-ca23-4079-8b23-ba0c32cc89a9-audit-policies\") pod \"oauth-openshift-558db77b4-nsc4d\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.147983 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d6bb6cb-129c-4f02-8afc-4c73ebaefdb2-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8gmql\" (UID: \"3d6bb6cb-129c-4f02-8afc-4c73ebaefdb2\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8gmql" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.148033 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ef0407d-860c-4e18-8dea-9373887d7b88-config\") pod \"apiserver-76f77b778f-j6ww5\" (UID: \"3ef0407d-860c-4e18-8dea-9373887d7b88\") " pod="openshift-apiserver/apiserver-76f77b778f-j6ww5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.148058 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3ef0407d-860c-4e18-8dea-9373887d7b88-audit-dir\") pod \"apiserver-76f77b778f-j6ww5\" (UID: \"3ef0407d-860c-4e18-8dea-9373887d7b88\") " pod="openshift-apiserver/apiserver-76f77b778f-j6ww5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.148111 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2ec75c03-cdcf-4d6e-97cd-453ea4d288f3-tmpfs\") pod \"packageserver-d55dfcdfc-5bptf\" (UID: \"2ec75c03-cdcf-4d6e-97cd-453ea4d288f3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5bptf" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.148149 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6adde762-4e97-44eb-a96c-14a79ec7998a-bound-sa-token\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.148212 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trlrz\" (UniqueName: \"kubernetes.io/projected/caeaacdd-b085-4196-8fcc-f10ba2b593f7-kube-api-access-trlrz\") pod \"dns-operator-744455d44c-ddxlt\" (UID: 
\"caeaacdd-b085-4196-8fcc-f10ba2b593f7\") " pod="openshift-dns-operator/dns-operator-744455d44c-ddxlt" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.148241 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/47f5a0b2-7757-4795-901e-d175d64ebe67-console-config\") pod \"console-f9d7485db-vwjp5\" (UID: \"47f5a0b2-7757-4795-901e-d175d64ebe67\") " pod="openshift-console/console-f9d7485db-vwjp5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.148259 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/caeaacdd-b085-4196-8fcc-f10ba2b593f7-metrics-tls\") pod \"dns-operator-744455d44c-ddxlt\" (UID: \"caeaacdd-b085-4196-8fcc-f10ba2b593f7\") " pod="openshift-dns-operator/dns-operator-744455d44c-ddxlt" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.148338 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6adde762-4e97-44eb-a96c-14a79ec7998a-registry-certificates\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.148483 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/975cb02b-51f0-4d7b-a59c-b25126c0c1c2-config\") pod \"route-controller-manager-6576b87f9c-72vpf\" (UID: \"975cb02b-51f0-4d7b-a59c-b25126c0c1c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-72vpf" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.148808 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8156e329-ca23-4079-8b23-ba0c32cc89a9-audit-policies\") pod 
\"oauth-openshift-558db77b4-nsc4d\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.148848 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/63594f44-fa91-43fe-b1da-d1df4f593e45-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-wtdrs\" (UID: \"63594f44-fa91-43fe-b1da-d1df4f593e45\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wtdrs" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.149116 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1bfd5a9-8223-4fb1-b3e3-1acab59d886e-trusted-ca\") pod \"ingress-operator-5b745b69d9-qmkrf\" (UID: \"f1bfd5a9-8223-4fb1-b3e3-1acab59d886e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qmkrf" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.149125 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6917b6e-005e-44cf-92c4-6fc271f5ce49-serving-cert\") pod \"apiserver-7bbb656c7d-9l4wd\" (UID: \"a6917b6e-005e-44cf-92c4-6fc271f5ce49\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9l4wd" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.149193 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3f971e8-68fd-40a7-902a-ba8c6110f14d-serving-cert\") pod \"openshift-config-operator-7777fb866f-7l2ns\" (UID: \"d3f971e8-68fd-40a7-902a-ba8c6110f14d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7l2ns" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.149196 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/3ef0407d-860c-4e18-8dea-9373887d7b88-etcd-client\") pod \"apiserver-76f77b778f-j6ww5\" (UID: \"3ef0407d-860c-4e18-8dea-9373887d7b88\") " pod="openshift-apiserver/apiserver-76f77b778f-j6ww5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.149266 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/952ee042-cdc6-44b7-8aa0-b24f7e1e1027-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-4c8h9\" (UID: \"952ee042-cdc6-44b7-8aa0-b24f7e1e1027\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4c8h9" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.149316 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c35b6e44-42c4-4322-a9cd-14a087926529-serving-cert\") pod \"service-ca-operator-777779d784-mtzzc\" (UID: \"c35b6e44-42c4-4322-a9cd-14a087926529\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mtzzc" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.149421 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e69af92e-0d80-428e-930c-fbd86b17643c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-d8n2x\" (UID: \"e69af92e-0d80-428e-930c-fbd86b17643c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d8n2x" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.149558 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-nsc4d\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.149640 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/47f5a0b2-7757-4795-901e-d175d64ebe67-service-ca\") pod \"console-f9d7485db-vwjp5\" (UID: \"47f5a0b2-7757-4795-901e-d175d64ebe67\") " pod="openshift-console/console-f9d7485db-vwjp5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.149940 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5922ee53-d413-4676-ab1e-21f570893009-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-gc82m\" (UID: \"5922ee53-d413-4676-ab1e-21f570893009\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc82m" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.150142 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/975cb02b-51f0-4d7b-a59c-b25126c0c1c2-config\") pod \"route-controller-manager-6576b87f9c-72vpf\" (UID: \"975cb02b-51f0-4d7b-a59c-b25126c0c1c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-72vpf" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.150161 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/47f5a0b2-7757-4795-901e-d175d64ebe67-console-config\") pod \"console-f9d7485db-vwjp5\" (UID: \"47f5a0b2-7757-4795-901e-d175d64ebe67\") " pod="openshift-console/console-f9d7485db-vwjp5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.150193 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6adde762-4e97-44eb-a96c-14a79ec7998a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-qrqjl\" 
(UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.150276 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a6917b6e-005e-44cf-92c4-6fc271f5ce49-encryption-config\") pod \"apiserver-7bbb656c7d-9l4wd\" (UID: \"a6917b6e-005e-44cf-92c4-6fc271f5ce49\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9l4wd" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.150342 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54c0b843-e588-4296-a0ab-8272fa1b23e5-serving-cert\") pod \"authentication-operator-69f744f599-wv4hn\" (UID: \"54c0b843-e588-4296-a0ab-8272fa1b23e5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wv4hn" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.150368 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/975cb02b-51f0-4d7b-a59c-b25126c0c1c2-client-ca\") pod \"route-controller-manager-6576b87f9c-72vpf\" (UID: \"975cb02b-51f0-4d7b-a59c-b25126c0c1c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-72vpf" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.150468 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd9c7\" (UniqueName: \"kubernetes.io/projected/f1bfd5a9-8223-4fb1-b3e3-1acab59d886e-kube-api-access-nd9c7\") pod \"ingress-operator-5b745b69d9-qmkrf\" (UID: \"f1bfd5a9-8223-4fb1-b3e3-1acab59d886e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qmkrf" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.150499 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-nsc4d\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.150524 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt2km\" (UniqueName: \"kubernetes.io/projected/d3f971e8-68fd-40a7-902a-ba8c6110f14d-kube-api-access-nt2km\") pod \"openshift-config-operator-7777fb866f-7l2ns\" (UID: \"d3f971e8-68fd-40a7-902a-ba8c6110f14d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7l2ns" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.150567 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b015794a-bfb0-4118-8dae-8861a7ff6a03-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-pgkx8\" (UID: \"b015794a-bfb0-4118-8dae-8861a7ff6a03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pgkx8" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.150563 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/47f5a0b2-7757-4795-901e-d175d64ebe67-service-ca\") pod \"console-f9d7485db-vwjp5\" (UID: \"47f5a0b2-7757-4795-901e-d175d64ebe67\") " pod="openshift-console/console-f9d7485db-vwjp5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.150714 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54c0b843-e588-4296-a0ab-8272fa1b23e5-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-wv4hn\" (UID: \"54c0b843-e588-4296-a0ab-8272fa1b23e5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wv4hn" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.150758 
4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/952ee042-cdc6-44b7-8aa0-b24f7e1e1027-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-4c8h9\" (UID: \"952ee042-cdc6-44b7-8aa0-b24f7e1e1027\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4c8h9" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.150761 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-nsc4d\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.150857 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bdde3a3a-6d8a-45b8-b718-378bca43559e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-lhhfq\" (UID: \"bdde3a3a-6d8a-45b8-b718-378bca43559e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lhhfq" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.150909 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln9hn\" (UniqueName: \"kubernetes.io/projected/3ad64404-53a2-45aa-8eba-292f14135379-kube-api-access-ln9hn\") pod \"machine-config-operator-74547568cd-6xjzj\" (UID: \"3ad64404-53a2-45aa-8eba-292f14135379\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6xjzj" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.150931 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/54c0b843-e588-4296-a0ab-8272fa1b23e5-service-ca-bundle\") pod \"authentication-operator-69f744f599-wv4hn\" (UID: \"54c0b843-e588-4296-a0ab-8272fa1b23e5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wv4hn" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.151001 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-nsc4d\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.151084 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05c9c8a7-f1d1-4d38-b6e3-9079942aaf40-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sllw5\" (UID: \"05c9c8a7-f1d1-4d38-b6e3-9079942aaf40\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sllw5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.151171 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2zxs\" (UniqueName: \"kubernetes.io/projected/e69af92e-0d80-428e-930c-fbd86b17643c-kube-api-access-x2zxs\") pod \"kube-storage-version-migrator-operator-b67b599dd-d8n2x\" (UID: \"e69af92e-0d80-428e-930c-fbd86b17643c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d8n2x" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.151270 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f1bfd5a9-8223-4fb1-b3e3-1acab59d886e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-qmkrf\" 
(UID: \"f1bfd5a9-8223-4fb1-b3e3-1acab59d886e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qmkrf" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.151363 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e183bd5d-f0d0-4254-82d5-240578ae6d1a-socket-dir\") pod \"csi-hostpathplugin-qvpfk\" (UID: \"e183bd5d-f0d0-4254-82d5-240578ae6d1a\") " pod="hostpath-provisioner/csi-hostpathplugin-qvpfk" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.151401 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj7nl\" (UniqueName: \"kubernetes.io/projected/9472f740-2de2-4b00-b109-0b17604c4c9e-kube-api-access-dj7nl\") pod \"olm-operator-6b444d44fb-2wgf7\" (UID: \"9472f740-2de2-4b00-b109-0b17604c4c9e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2wgf7" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.151486 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/975cb02b-51f0-4d7b-a59c-b25126c0c1c2-client-ca\") pod \"route-controller-manager-6576b87f9c-72vpf\" (UID: \"975cb02b-51f0-4d7b-a59c-b25126c0c1c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-72vpf" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.151511 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5jpn\" (UniqueName: \"kubernetes.io/projected/975cb02b-51f0-4d7b-a59c-b25126c0c1c2-kube-api-access-l5jpn\") pod \"route-controller-manager-6576b87f9c-72vpf\" (UID: \"975cb02b-51f0-4d7b-a59c-b25126c0c1c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-72vpf" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.151492 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54c0b843-e588-4296-a0ab-8272fa1b23e5-service-ca-bundle\") pod \"authentication-operator-69f744f599-wv4hn\" (UID: \"54c0b843-e588-4296-a0ab-8272fa1b23e5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wv4hn" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.151544 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/47f5a0b2-7757-4795-901e-d175d64ebe67-oauth-serving-cert\") pod \"console-f9d7485db-vwjp5\" (UID: \"47f5a0b2-7757-4795-901e-d175d64ebe67\") " pod="openshift-console/console-f9d7485db-vwjp5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.151673 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc6nm\" (UniqueName: \"kubernetes.io/projected/80f22eb7-03e4-4217-939a-1600b43c788a-kube-api-access-sc6nm\") pod \"migrator-59844c95c7-vrj6k\" (UID: \"80f22eb7-03e4-4217-939a-1600b43c788a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vrj6k" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.151722 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f90e2034-b734-41d4-b7e2-856605c95aff-profile-collector-cert\") pod \"catalog-operator-68c6474976-lw2xw\" (UID: \"f90e2034-b734-41d4-b7e2-856605c95aff\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lw2xw" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.151762 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.151795 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9472f740-2de2-4b00-b109-0b17604c4c9e-srv-cert\") pod \"olm-operator-6b444d44fb-2wgf7\" (UID: \"9472f740-2de2-4b00-b109-0b17604c4c9e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2wgf7" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.151827 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/6cba5a02-cf5f-4aaa-9c5c-c5979789f000-machine-approver-tls\") pod \"machine-approver-56656f9798-k47sw\" (UID: \"6cba5a02-cf5f-4aaa-9c5c-c5979789f000\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k47sw" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.151862 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br2d4\" (UniqueName: \"kubernetes.io/projected/ddd8d0e9-9f75-4f78-9d96-9373fdaa9c08-kube-api-access-br2d4\") pod \"router-default-5444994796-whlx4\" (UID: \"ddd8d0e9-9f75-4f78-9d96-9373fdaa9c08\") " pod="openshift-ingress/router-default-5444994796-whlx4" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.151898 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f1bfd5a9-8223-4fb1-b3e3-1acab59d886e-metrics-tls\") pod \"ingress-operator-5b745b69d9-qmkrf\" (UID: \"f1bfd5a9-8223-4fb1-b3e3-1acab59d886e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qmkrf" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.151938 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/a6917b6e-005e-44cf-92c4-6fc271f5ce49-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9l4wd\" (UID: \"a6917b6e-005e-44cf-92c4-6fc271f5ce49\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9l4wd" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.151974 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ddd8d0e9-9f75-4f78-9d96-9373fdaa9c08-default-certificate\") pod \"router-default-5444994796-whlx4\" (UID: \"ddd8d0e9-9f75-4f78-9d96-9373fdaa9c08\") " pod="openshift-ingress/router-default-5444994796-whlx4" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.152007 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d3f971e8-68fd-40a7-902a-ba8c6110f14d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7l2ns\" (UID: \"d3f971e8-68fd-40a7-902a-ba8c6110f14d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7l2ns" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.152038 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p2p8\" (UniqueName: \"kubernetes.io/projected/54c0b843-e588-4296-a0ab-8272fa1b23e5-kube-api-access-4p2p8\") pod \"authentication-operator-69f744f599-wv4hn\" (UID: \"54c0b843-e588-4296-a0ab-8272fa1b23e5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wv4hn" Jan 27 13:09:18 crc kubenswrapper[4786]: E0127 13:09:18.152069 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:18.652054158 +0000 UTC m=+141.862668397 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.152095 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvmwm\" (UniqueName: \"kubernetes.io/projected/8156e329-ca23-4079-8b23-ba0c32cc89a9-kube-api-access-hvmwm\") pod \"oauth-openshift-558db77b4-nsc4d\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.152132 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7275decb-852d-401b-81b7-affb84126aad-trusted-ca\") pod \"console-operator-58897d9998-qzpj9\" (UID: \"7275decb-852d-401b-81b7-affb84126aad\") " pod="openshift-console-operator/console-operator-58897d9998-qzpj9" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.152153 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ef0407d-860c-4e18-8dea-9373887d7b88-trusted-ca-bundle\") pod \"apiserver-76f77b778f-j6ww5\" (UID: \"3ef0407d-860c-4e18-8dea-9373887d7b88\") " pod="openshift-apiserver/apiserver-76f77b778f-j6ww5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.152179 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2ec75c03-cdcf-4d6e-97cd-453ea4d288f3-apiservice-cert\") pod 
\"packageserver-d55dfcdfc-5bptf\" (UID: \"2ec75c03-cdcf-4d6e-97cd-453ea4d288f3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5bptf" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.152201 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3ad64404-53a2-45aa-8eba-292f14135379-auth-proxy-config\") pod \"machine-config-operator-74547568cd-6xjzj\" (UID: \"3ad64404-53a2-45aa-8eba-292f14135379\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6xjzj" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.152224 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7275decb-852d-401b-81b7-affb84126aad-serving-cert\") pod \"console-operator-58897d9998-qzpj9\" (UID: \"7275decb-852d-401b-81b7-affb84126aad\") " pod="openshift-console-operator/console-operator-58897d9998-qzpj9" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.152246 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-nsc4d\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.152283 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3ef0407d-860c-4e18-8dea-9373887d7b88-etcd-serving-ca\") pod \"apiserver-76f77b778f-j6ww5\" (UID: \"3ef0407d-860c-4e18-8dea-9373887d7b88\") " pod="openshift-apiserver/apiserver-76f77b778f-j6ww5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.152308 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgp8p\" (UniqueName: \"kubernetes.io/projected/f90e2034-b734-41d4-b7e2-856605c95aff-kube-api-access-cgp8p\") pod \"catalog-operator-68c6474976-lw2xw\" (UID: \"f90e2034-b734-41d4-b7e2-856605c95aff\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lw2xw" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.152329 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4pwp\" (UniqueName: \"kubernetes.io/projected/33df9034-8bca-4bf5-ac81-73b8b14ca319-kube-api-access-j4pwp\") pod \"dns-default-ck8s7\" (UID: \"33df9034-8bca-4bf5-ac81-73b8b14ca319\") " pod="openshift-dns/dns-default-ck8s7" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.152351 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9vft\" (UniqueName: \"kubernetes.io/projected/6adde762-4e97-44eb-a96c-14a79ec7998a-kube-api-access-w9vft\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.152371 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzjfm\" (UniqueName: \"kubernetes.io/projected/3ef0407d-860c-4e18-8dea-9373887d7b88-kube-api-access-bzjfm\") pod \"apiserver-76f77b778f-j6ww5\" (UID: \"3ef0407d-860c-4e18-8dea-9373887d7b88\") " pod="openshift-apiserver/apiserver-76f77b778f-j6ww5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.152427 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znbz6\" (UniqueName: \"kubernetes.io/projected/bb619dcb-2b7a-413f-b136-48e4eec0eb9f-kube-api-access-znbz6\") pod \"cluster-samples-operator-665b6dd947-vszvp\" (UID: \"bb619dcb-2b7a-413f-b136-48e4eec0eb9f\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vszvp" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.152446 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/47f5a0b2-7757-4795-901e-d175d64ebe67-console-serving-cert\") pod \"console-f9d7485db-vwjp5\" (UID: \"47f5a0b2-7757-4795-901e-d175d64ebe67\") " pod="openshift-console/console-f9d7485db-vwjp5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.152514 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9b6d90be-b987-4cac-8e4c-7c1afa1abffe-signing-key\") pod \"service-ca-9c57cc56f-8vdpv\" (UID: \"9b6d90be-b987-4cac-8e4c-7c1afa1abffe\") " pod="openshift-service-ca/service-ca-9c57cc56f-8vdpv" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.152630 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-nsc4d\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.152658 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fa7fb69c-753a-481b-891d-661a2b6b37fd-etcd-service-ca\") pod \"etcd-operator-b45778765-lr6n6\" (UID: \"fa7fb69c-753a-481b-891d-661a2b6b37fd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lr6n6" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.152682 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-nsc4d\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.152751 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ddd8d0e9-9f75-4f78-9d96-9373fdaa9c08-stats-auth\") pod \"router-default-5444994796-whlx4\" (UID: \"ddd8d0e9-9f75-4f78-9d96-9373fdaa9c08\") " pod="openshift-ingress/router-default-5444994796-whlx4" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.152824 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63594f44-fa91-43fe-b1da-d1df4f593e45-client-ca\") pod \"controller-manager-879f6c89f-wtdrs\" (UID: \"63594f44-fa91-43fe-b1da-d1df4f593e45\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wtdrs" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.152850 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2ec75c03-cdcf-4d6e-97cd-453ea4d288f3-webhook-cert\") pod \"packageserver-d55dfcdfc-5bptf\" (UID: \"2ec75c03-cdcf-4d6e-97cd-453ea4d288f3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5bptf" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.153006 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa7fb69c-753a-481b-891d-661a2b6b37fd-config\") pod \"etcd-operator-b45778765-lr6n6\" (UID: \"fa7fb69c-753a-481b-891d-661a2b6b37fd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lr6n6" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.153032 4786 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3ef0407d-860c-4e18-8dea-9373887d7b88-etcd-serving-ca\") pod \"apiserver-76f77b778f-j6ww5\" (UID: \"3ef0407d-860c-4e18-8dea-9373887d7b88\") " pod="openshift-apiserver/apiserver-76f77b778f-j6ww5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.153037 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb619dcb-2b7a-413f-b136-48e4eec0eb9f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-vszvp\" (UID: \"bb619dcb-2b7a-413f-b136-48e4eec0eb9f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vszvp" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.153088 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6cba5a02-cf5f-4aaa-9c5c-c5979789f000-auth-proxy-config\") pod \"machine-approver-56656f9798-k47sw\" (UID: \"6cba5a02-cf5f-4aaa-9c5c-c5979789f000\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k47sw" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.153109 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63594f44-fa91-43fe-b1da-d1df4f593e45-serving-cert\") pod \"controller-manager-879f6c89f-wtdrs\" (UID: \"63594f44-fa91-43fe-b1da-d1df4f593e45\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wtdrs" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.153153 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a6917b6e-005e-44cf-92c4-6fc271f5ce49-etcd-client\") pod \"apiserver-7bbb656c7d-9l4wd\" (UID: \"a6917b6e-005e-44cf-92c4-6fc271f5ce49\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9l4wd" Jan 27 13:09:18 crc 
kubenswrapper[4786]: I0127 13:09:18.153178 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfh6f\" (UniqueName: \"kubernetes.io/projected/6cba5a02-cf5f-4aaa-9c5c-c5979789f000-kube-api-access-xfh6f\") pod \"machine-approver-56656f9798-k47sw\" (UID: \"6cba5a02-cf5f-4aaa-9c5c-c5979789f000\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k47sw" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.153200 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jxmx\" (UniqueName: \"kubernetes.io/projected/bdde3a3a-6d8a-45b8-b718-378bca43559e-kube-api-access-4jxmx\") pod \"cluster-image-registry-operator-dc59b4c8b-lhhfq\" (UID: \"bdde3a3a-6d8a-45b8-b718-378bca43559e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lhhfq" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.153250 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4ac1751a-b5c6-46c7-a771-200f41805eea-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-47c2z\" (UID: \"4ac1751a-b5c6-46c7-a771-200f41805eea\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-47c2z" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.153274 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6adde762-4e97-44eb-a96c-14a79ec7998a-trusted-ca\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.153325 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx5sw\" (UniqueName: 
\"kubernetes.io/projected/5922ee53-d413-4676-ab1e-21f570893009-kube-api-access-kx5sw\") pod \"openshift-controller-manager-operator-756b6f6bc6-gc82m\" (UID: \"5922ee53-d413-4676-ab1e-21f570893009\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc82m" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.153351 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfjfz\" (UniqueName: \"kubernetes.io/projected/3d6bb6cb-129c-4f02-8afc-4c73ebaefdb2-kube-api-access-sfjfz\") pod \"openshift-apiserver-operator-796bbdcf4f-8gmql\" (UID: \"3d6bb6cb-129c-4f02-8afc-4c73ebaefdb2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8gmql" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.153373 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a6917b6e-005e-44cf-92c4-6fc271f5ce49-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9l4wd\" (UID: \"a6917b6e-005e-44cf-92c4-6fc271f5ce49\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9l4wd" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.153395 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05c9c8a7-f1d1-4d38-b6e3-9079942aaf40-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sllw5\" (UID: \"05c9c8a7-f1d1-4d38-b6e3-9079942aaf40\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sllw5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.153418 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/975cb02b-51f0-4d7b-a59c-b25126c0c1c2-serving-cert\") pod \"route-controller-manager-6576b87f9c-72vpf\" (UID: \"975cb02b-51f0-4d7b-a59c-b25126c0c1c2\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-72vpf" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.153438 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3ef0407d-860c-4e18-8dea-9373887d7b88-image-import-ca\") pod \"apiserver-76f77b778f-j6ww5\" (UID: \"3ef0407d-860c-4e18-8dea-9373887d7b88\") " pod="openshift-apiserver/apiserver-76f77b778f-j6ww5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.153457 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3cf531c6-d1a1-4f65-af72-093ffdb034c1-secret-volume\") pod \"collect-profiles-29491980-6pf7c\" (UID: \"3cf531c6-d1a1-4f65-af72-093ffdb034c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491980-6pf7c" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.153479 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bdde3a3a-6d8a-45b8-b718-378bca43559e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-lhhfq\" (UID: \"bdde3a3a-6d8a-45b8-b718-378bca43559e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lhhfq" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.153500 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05c9c8a7-f1d1-4d38-b6e3-9079942aaf40-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sllw5\" (UID: \"05c9c8a7-f1d1-4d38-b6e3-9079942aaf40\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sllw5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.153521 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-zfgsl\" (UniqueName: \"kubernetes.io/projected/7275decb-852d-401b-81b7-affb84126aad-kube-api-access-zfgsl\") pod \"console-operator-58897d9998-qzpj9\" (UID: \"7275decb-852d-401b-81b7-affb84126aad\") " pod="openshift-console-operator/console-operator-58897d9998-qzpj9" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.153547 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/47f5a0b2-7757-4795-901e-d175d64ebe67-console-oauth-config\") pod \"console-f9d7485db-vwjp5\" (UID: \"47f5a0b2-7757-4795-901e-d175d64ebe67\") " pod="openshift-console/console-f9d7485db-vwjp5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.153570 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e183bd5d-f0d0-4254-82d5-240578ae6d1a-mountpoint-dir\") pod \"csi-hostpathplugin-qvpfk\" (UID: \"e183bd5d-f0d0-4254-82d5-240578ae6d1a\") " pod="hostpath-provisioner/csi-hostpathplugin-qvpfk" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.153592 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4872c606-0857-4266-8cd8-a78c8070b5ca-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-knblz\" (UID: \"4872c606-0857-4266-8cd8-a78c8070b5ca\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-knblz" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.153816 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6917b6e-005e-44cf-92c4-6fc271f5ce49-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9l4wd\" (UID: \"a6917b6e-005e-44cf-92c4-6fc271f5ce49\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9l4wd" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.154522 
4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ef0407d-860c-4e18-8dea-9373887d7b88-trusted-ca-bundle\") pod \"apiserver-76f77b778f-j6ww5\" (UID: \"3ef0407d-860c-4e18-8dea-9373887d7b88\") " pod="openshift-apiserver/apiserver-76f77b778f-j6ww5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.154920 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63594f44-fa91-43fe-b1da-d1df4f593e45-client-ca\") pod \"controller-manager-879f6c89f-wtdrs\" (UID: \"63594f44-fa91-43fe-b1da-d1df4f593e45\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wtdrs" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.155132 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-nsc4d\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.155222 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d3f971e8-68fd-40a7-902a-ba8c6110f14d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7l2ns\" (UID: \"d3f971e8-68fd-40a7-902a-ba8c6110f14d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7l2ns" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.155515 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a6917b6e-005e-44cf-92c4-6fc271f5ce49-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9l4wd\" (UID: \"a6917b6e-005e-44cf-92c4-6fc271f5ce49\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9l4wd" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.152373 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54c0b843-e588-4296-a0ab-8272fa1b23e5-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-wv4hn\" (UID: \"54c0b843-e588-4296-a0ab-8272fa1b23e5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wv4hn" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.156093 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-nsc4d\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.156093 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/6cba5a02-cf5f-4aaa-9c5c-c5979789f000-machine-approver-tls\") pod \"machine-approver-56656f9798-k47sw\" (UID: \"6cba5a02-cf5f-4aaa-9c5c-c5979789f000\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k47sw" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.156188 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-nsc4d\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.156215 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7275decb-852d-401b-81b7-affb84126aad-serving-cert\") pod \"console-operator-58897d9998-qzpj9\" (UID: \"7275decb-852d-401b-81b7-affb84126aad\") " pod="openshift-console-operator/console-operator-58897d9998-qzpj9" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.156246 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d6bb6cb-129c-4f02-8afc-4c73ebaefdb2-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8gmql\" (UID: \"3d6bb6cb-129c-4f02-8afc-4c73ebaefdb2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8gmql" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.156327 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/47f5a0b2-7757-4795-901e-d175d64ebe67-oauth-serving-cert\") pod \"console-f9d7485db-vwjp5\" (UID: \"47f5a0b2-7757-4795-901e-d175d64ebe67\") " pod="openshift-console/console-f9d7485db-vwjp5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.156383 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6cba5a02-cf5f-4aaa-9c5c-c5979789f000-auth-proxy-config\") pod \"machine-approver-56656f9798-k47sw\" (UID: \"6cba5a02-cf5f-4aaa-9c5c-c5979789f000\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k47sw" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.157040 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3ef0407d-860c-4e18-8dea-9373887d7b88-encryption-config\") pod \"apiserver-76f77b778f-j6ww5\" (UID: \"3ef0407d-860c-4e18-8dea-9373887d7b88\") " pod="openshift-apiserver/apiserver-76f77b778f-j6ww5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.157240 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7275decb-852d-401b-81b7-affb84126aad-trusted-ca\") pod \"console-operator-58897d9998-qzpj9\" (UID: \"7275decb-852d-401b-81b7-affb84126aad\") " pod="openshift-console-operator/console-operator-58897d9998-qzpj9" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.157859 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb619dcb-2b7a-413f-b136-48e4eec0eb9f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-vszvp\" (UID: \"bb619dcb-2b7a-413f-b136-48e4eec0eb9f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vszvp" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.157895 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6adde762-4e97-44eb-a96c-14a79ec7998a-trusted-ca\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.158065 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3ef0407d-860c-4e18-8dea-9373887d7b88-image-import-ca\") pod \"apiserver-76f77b778f-j6ww5\" (UID: \"3ef0407d-860c-4e18-8dea-9373887d7b88\") " pod="openshift-apiserver/apiserver-76f77b778f-j6ww5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.158818 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6adde762-4e97-44eb-a96c-14a79ec7998a-registry-tls\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.159351 4786 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a6917b6e-005e-44cf-92c4-6fc271f5ce49-encryption-config\") pod \"apiserver-7bbb656c7d-9l4wd\" (UID: \"a6917b6e-005e-44cf-92c4-6fc271f5ce49\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9l4wd" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.160174 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63594f44-fa91-43fe-b1da-d1df4f593e45-serving-cert\") pod \"controller-manager-879f6c89f-wtdrs\" (UID: \"63594f44-fa91-43fe-b1da-d1df4f593e45\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wtdrs" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.161136 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-nsc4d\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.161258 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/975cb02b-51f0-4d7b-a59c-b25126c0c1c2-serving-cert\") pod \"route-controller-manager-6576b87f9c-72vpf\" (UID: \"975cb02b-51f0-4d7b-a59c-b25126c0c1c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-72vpf" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.162577 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/47f5a0b2-7757-4795-901e-d175d64ebe67-console-oauth-config\") pod \"console-f9d7485db-vwjp5\" (UID: \"47f5a0b2-7757-4795-901e-d175d64ebe67\") " pod="openshift-console/console-f9d7485db-vwjp5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 
13:09:18.163290 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f1bfd5a9-8223-4fb1-b3e3-1acab59d886e-metrics-tls\") pod \"ingress-operator-5b745b69d9-qmkrf\" (UID: \"f1bfd5a9-8223-4fb1-b3e3-1acab59d886e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qmkrf" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.163797 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a6917b6e-005e-44cf-92c4-6fc271f5ce49-etcd-client\") pod \"apiserver-7bbb656c7d-9l4wd\" (UID: \"a6917b6e-005e-44cf-92c4-6fc271f5ce49\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9l4wd" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.165476 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-nsc4d\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.167063 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-nsc4d\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.167825 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-nsc4d\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.167869 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ef0407d-860c-4e18-8dea-9373887d7b88-serving-cert\") pod \"apiserver-76f77b778f-j6ww5\" (UID: \"3ef0407d-860c-4e18-8dea-9373887d7b88\") " pod="openshift-apiserver/apiserver-76f77b778f-j6ww5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.167888 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3ef0407d-860c-4e18-8dea-9373887d7b88-etcd-client\") pod \"apiserver-76f77b778f-j6ww5\" (UID: \"3ef0407d-860c-4e18-8dea-9373887d7b88\") " pod="openshift-apiserver/apiserver-76f77b778f-j6ww5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.173648 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/47f5a0b2-7757-4795-901e-d175d64ebe67-console-serving-cert\") pod \"console-f9d7485db-vwjp5\" (UID: \"47f5a0b2-7757-4795-901e-d175d64ebe67\") " pod="openshift-console/console-f9d7485db-vwjp5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.193243 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm24p\" (UniqueName: \"kubernetes.io/projected/a6917b6e-005e-44cf-92c4-6fc271f5ce49-kube-api-access-gm24p\") pod \"apiserver-7bbb656c7d-9l4wd\" (UID: \"a6917b6e-005e-44cf-92c4-6fc271f5ce49\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9l4wd" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.214845 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65s5z\" (UniqueName: \"kubernetes.io/projected/63594f44-fa91-43fe-b1da-d1df4f593e45-kube-api-access-65s5z\") pod \"controller-manager-879f6c89f-wtdrs\" (UID: \"63594f44-fa91-43fe-b1da-d1df4f593e45\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-wtdrs" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.232805 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4zkf\" (UniqueName: \"kubernetes.io/projected/b015794a-bfb0-4118-8dae-8861a7ff6a03-kube-api-access-v4zkf\") pod \"machine-api-operator-5694c8668f-pgkx8\" (UID: \"b015794a-bfb0-4118-8dae-8861a7ff6a03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pgkx8" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.254841 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbcrj\" (UniqueName: \"kubernetes.io/projected/cbf5f627-0aa5-4a32-840c-f76373e2150e-kube-api-access-pbcrj\") pod \"downloads-7954f5f757-8nm76\" (UID: \"cbf5f627-0aa5-4a32-840c-f76373e2150e\") " pod="openshift-console/downloads-7954f5f757-8nm76" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.254869 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.255001 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2ec75c03-cdcf-4d6e-97cd-453ea4d288f3-apiservice-cert\") pod \"packageserver-d55dfcdfc-5bptf\" (UID: \"2ec75c03-cdcf-4d6e-97cd-453ea4d288f3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5bptf" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.255242 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3ad64404-53a2-45aa-8eba-292f14135379-auth-proxy-config\") pod 
\"machine-config-operator-74547568cd-6xjzj\" (UID: \"3ad64404-53a2-45aa-8eba-292f14135379\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6xjzj" Jan 27 13:09:18 crc kubenswrapper[4786]: E0127 13:09:18.255269 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:18.755234033 +0000 UTC m=+141.965848252 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.255450 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgp8p\" (UniqueName: \"kubernetes.io/projected/f90e2034-b734-41d4-b7e2-856605c95aff-kube-api-access-cgp8p\") pod \"catalog-operator-68c6474976-lw2xw\" (UID: \"f90e2034-b734-41d4-b7e2-856605c95aff\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lw2xw" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.255541 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4pwp\" (UniqueName: \"kubernetes.io/projected/33df9034-8bca-4bf5-ac81-73b8b14ca319-kube-api-access-j4pwp\") pod \"dns-default-ck8s7\" (UID: \"33df9034-8bca-4bf5-ac81-73b8b14ca319\") " pod="openshift-dns/dns-default-ck8s7" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.255668 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"signing-key\" (UniqueName: \"kubernetes.io/secret/9b6d90be-b987-4cac-8e4c-7c1afa1abffe-signing-key\") pod \"service-ca-9c57cc56f-8vdpv\" (UID: \"9b6d90be-b987-4cac-8e4c-7c1afa1abffe\") " pod="openshift-service-ca/service-ca-9c57cc56f-8vdpv" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.255760 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fa7fb69c-753a-481b-891d-661a2b6b37fd-etcd-service-ca\") pod \"etcd-operator-b45778765-lr6n6\" (UID: \"fa7fb69c-753a-481b-891d-661a2b6b37fd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lr6n6" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.255832 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ddd8d0e9-9f75-4f78-9d96-9373fdaa9c08-stats-auth\") pod \"router-default-5444994796-whlx4\" (UID: \"ddd8d0e9-9f75-4f78-9d96-9373fdaa9c08\") " pod="openshift-ingress/router-default-5444994796-whlx4" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.255901 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2ec75c03-cdcf-4d6e-97cd-453ea4d288f3-webhook-cert\") pod \"packageserver-d55dfcdfc-5bptf\" (UID: \"2ec75c03-cdcf-4d6e-97cd-453ea4d288f3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5bptf" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.255973 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa7fb69c-753a-481b-891d-661a2b6b37fd-config\") pod \"etcd-operator-b45778765-lr6n6\" (UID: \"fa7fb69c-753a-481b-891d-661a2b6b37fd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lr6n6" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.256048 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4ac1751a-b5c6-46c7-a771-200f41805eea-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-47c2z\" (UID: \"4ac1751a-b5c6-46c7-a771-200f41805eea\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-47c2z" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.256130 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jxmx\" (UniqueName: \"kubernetes.io/projected/bdde3a3a-6d8a-45b8-b718-378bca43559e-kube-api-access-4jxmx\") pod \"cluster-image-registry-operator-dc59b4c8b-lhhfq\" (UID: \"bdde3a3a-6d8a-45b8-b718-378bca43559e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lhhfq" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.256201 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05c9c8a7-f1d1-4d38-b6e3-9079942aaf40-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sllw5\" (UID: \"05c9c8a7-f1d1-4d38-b6e3-9079942aaf40\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sllw5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.256273 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3cf531c6-d1a1-4f65-af72-093ffdb034c1-secret-volume\") pod \"collect-profiles-29491980-6pf7c\" (UID: \"3cf531c6-d1a1-4f65-af72-093ffdb034c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491980-6pf7c" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.256374 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bdde3a3a-6d8a-45b8-b718-378bca43559e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-lhhfq\" (UID: 
\"bdde3a3a-6d8a-45b8-b718-378bca43559e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lhhfq" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.256471 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05c9c8a7-f1d1-4d38-b6e3-9079942aaf40-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sllw5\" (UID: \"05c9c8a7-f1d1-4d38-b6e3-9079942aaf40\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sllw5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.256561 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e183bd5d-f0d0-4254-82d5-240578ae6d1a-mountpoint-dir\") pod \"csi-hostpathplugin-qvpfk\" (UID: \"e183bd5d-f0d0-4254-82d5-240578ae6d1a\") " pod="hostpath-provisioner/csi-hostpathplugin-qvpfk" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.256696 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4872c606-0857-4266-8cd8-a78c8070b5ca-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-knblz\" (UID: \"4872c606-0857-4266-8cd8-a78c8070b5ca\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-knblz" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.256794 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgkj8\" (UniqueName: \"kubernetes.io/projected/9b6d90be-b987-4cac-8e4c-7c1afa1abffe-kube-api-access-sgkj8\") pod \"service-ca-9c57cc56f-8vdpv\" (UID: \"9b6d90be-b987-4cac-8e4c-7c1afa1abffe\") " pod="openshift-service-ca/service-ca-9c57cc56f-8vdpv" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.256895 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/3ad64404-53a2-45aa-8eba-292f14135379-images\") pod \"machine-config-operator-74547568cd-6xjzj\" (UID: \"3ad64404-53a2-45aa-8eba-292f14135379\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6xjzj" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.256997 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b44d5db3-8518-4039-9d50-6d9d991e78bc-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-h2lh7\" (UID: \"b44d5db3-8518-4039-9d50-6d9d991e78bc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h2lh7" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.257112 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk4bz\" (UniqueName: \"kubernetes.io/projected/294ae02f-293d-4db8-9a9f-6d3878c8ccf9-kube-api-access-rk4bz\") pod \"marketplace-operator-79b997595-ndd4n\" (UID: \"294ae02f-293d-4db8-9a9f-6d3878c8ccf9\") " pod="openshift-marketplace/marketplace-operator-79b997595-ndd4n" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.257211 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtgdd\" (UniqueName: \"kubernetes.io/projected/3cf531c6-d1a1-4f65-af72-093ffdb034c1-kube-api-access-qtgdd\") pod \"collect-profiles-29491980-6pf7c\" (UID: \"3cf531c6-d1a1-4f65-af72-093ffdb034c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491980-6pf7c" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.257312 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hl77z\" (UniqueName: \"kubernetes.io/projected/486083ab-c15d-43da-9eb1-c9736ed02e4e-kube-api-access-hl77z\") pod \"ingress-canary-pq9ft\" (UID: \"486083ab-c15d-43da-9eb1-c9736ed02e4e\") " pod="openshift-ingress-canary/ingress-canary-pq9ft" Jan 27 13:09:18 crc 
kubenswrapper[4786]: I0127 13:09:18.257416 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9b6d90be-b987-4cac-8e4c-7c1afa1abffe-signing-cabundle\") pod \"service-ca-9c57cc56f-8vdpv\" (UID: \"9b6d90be-b987-4cac-8e4c-7c1afa1abffe\") " pod="openshift-service-ca/service-ca-9c57cc56f-8vdpv" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.257526 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a0cb545f-a56f-4948-b56d-956edd2501db-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-2j2wh\" (UID: \"a0cb545f-a56f-4948-b56d-956edd2501db\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2j2wh" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.257649 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsqvc\" (UniqueName: \"kubernetes.io/projected/c35b6e44-42c4-4322-a9cd-14a087926529-kube-api-access-tsqvc\") pod \"service-ca-operator-777779d784-mtzzc\" (UID: \"c35b6e44-42c4-4322-a9cd-14a087926529\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mtzzc" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.257754 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbb6v\" (UniqueName: \"kubernetes.io/projected/4ac1751a-b5c6-46c7-a771-200f41805eea-kube-api-access-zbb6v\") pod \"control-plane-machine-set-operator-78cbb6b69f-47c2z\" (UID: \"4ac1751a-b5c6-46c7-a771-200f41805eea\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-47c2z" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.257921 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pclzk\" (UniqueName: 
\"kubernetes.io/projected/e183bd5d-f0d0-4254-82d5-240578ae6d1a-kube-api-access-pclzk\") pod \"csi-hostpathplugin-qvpfk\" (UID: \"e183bd5d-f0d0-4254-82d5-240578ae6d1a\") " pod="hostpath-provisioner/csi-hostpathplugin-qvpfk" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.258018 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fa7fb69c-753a-481b-891d-661a2b6b37fd-etcd-ca\") pod \"etcd-operator-b45778765-lr6n6\" (UID: \"fa7fb69c-753a-481b-891d-661a2b6b37fd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lr6n6" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.258106 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/fafe1848-6215-4056-92a7-d8032eca6e26-certs\") pod \"machine-config-server-shs56\" (UID: \"fafe1848-6215-4056-92a7-d8032eca6e26\") " pod="openshift-machine-config-operator/machine-config-server-shs56" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.258196 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e183bd5d-f0d0-4254-82d5-240578ae6d1a-plugins-dir\") pod \"csi-hostpathplugin-qvpfk\" (UID: \"e183bd5d-f0d0-4254-82d5-240578ae6d1a\") " pod="hostpath-provisioner/csi-hostpathplugin-qvpfk" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.258293 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/33df9034-8bca-4bf5-ac81-73b8b14ca319-metrics-tls\") pod \"dns-default-ck8s7\" (UID: \"33df9034-8bca-4bf5-ac81-73b8b14ca319\") " pod="openshift-dns/dns-default-ck8s7" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.258405 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp65l\" (UniqueName: 
\"kubernetes.io/projected/021c92f0-97e6-41aa-9ff3-b14e81d3431a-kube-api-access-sp65l\") pod \"machine-config-controller-84d6567774-wllp5\" (UID: \"021c92f0-97e6-41aa-9ff3-b14e81d3431a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wllp5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.258499 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f90e2034-b734-41d4-b7e2-856605c95aff-srv-cert\") pod \"catalog-operator-68c6474976-lw2xw\" (UID: \"f90e2034-b734-41d4-b7e2-856605c95aff\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lw2xw" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.258586 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e183bd5d-f0d0-4254-82d5-240578ae6d1a-csi-data-dir\") pod \"csi-hostpathplugin-qvpfk\" (UID: \"e183bd5d-f0d0-4254-82d5-240578ae6d1a\") " pod="hostpath-provisioner/csi-hostpathplugin-qvpfk" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.258697 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/952ee042-cdc6-44b7-8aa0-b24f7e1e1027-config\") pod \"kube-apiserver-operator-766d6c64bb-4c8h9\" (UID: \"952ee042-cdc6-44b7-8aa0-b24f7e1e1027\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4c8h9" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.258784 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa7fb69c-753a-481b-891d-661a2b6b37fd-serving-cert\") pod \"etcd-operator-b45778765-lr6n6\" (UID: \"fa7fb69c-753a-481b-891d-661a2b6b37fd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lr6n6" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.258890 4786 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/021c92f0-97e6-41aa-9ff3-b14e81d3431a-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-wllp5\" (UID: \"021c92f0-97e6-41aa-9ff3-b14e81d3431a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wllp5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.258975 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c35b6e44-42c4-4322-a9cd-14a087926529-config\") pod \"service-ca-operator-777779d784-mtzzc\" (UID: \"c35b6e44-42c4-4322-a9cd-14a087926529\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mtzzc" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.260924 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tdr7\" (UniqueName: \"kubernetes.io/projected/fa7fb69c-753a-481b-891d-661a2b6b37fd-kube-api-access-9tdr7\") pod \"etcd-operator-b45778765-lr6n6\" (UID: \"fa7fb69c-753a-481b-891d-661a2b6b37fd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lr6n6" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.261108 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9472f740-2de2-4b00-b109-0b17604c4c9e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-2wgf7\" (UID: \"9472f740-2de2-4b00-b109-0b17604c4c9e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2wgf7" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.261180 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkcwv\" (UniqueName: \"kubernetes.io/projected/fafe1848-6215-4056-92a7-d8032eca6e26-kube-api-access-zkcwv\") pod \"machine-config-server-shs56\" (UID: \"fafe1848-6215-4056-92a7-d8032eca6e26\") " 
pod="openshift-machine-config-operator/machine-config-server-shs56" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.261282 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fa7fb69c-753a-481b-891d-661a2b6b37fd-etcd-client\") pod \"etcd-operator-b45778765-lr6n6\" (UID: \"fa7fb69c-753a-481b-891d-661a2b6b37fd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lr6n6" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.261370 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ddd8d0e9-9f75-4f78-9d96-9373fdaa9c08-service-ca-bundle\") pod \"router-default-5444994796-whlx4\" (UID: \"ddd8d0e9-9f75-4f78-9d96-9373fdaa9c08\") " pod="openshift-ingress/router-default-5444994796-whlx4" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.261439 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bdde3a3a-6d8a-45b8-b718-378bca43559e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-lhhfq\" (UID: \"bdde3a3a-6d8a-45b8-b718-378bca43559e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lhhfq" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.261512 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/021c92f0-97e6-41aa-9ff3-b14e81d3431a-proxy-tls\") pod \"machine-config-controller-84d6567774-wllp5\" (UID: \"021c92f0-97e6-41aa-9ff3-b14e81d3431a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wllp5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.261584 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/ddd8d0e9-9f75-4f78-9d96-9373fdaa9c08-metrics-certs\") pod \"router-default-5444994796-whlx4\" (UID: \"ddd8d0e9-9f75-4f78-9d96-9373fdaa9c08\") " pod="openshift-ingress/router-default-5444994796-whlx4" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.261685 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrvb4\" (UniqueName: \"kubernetes.io/projected/2ec75c03-cdcf-4d6e-97cd-453ea4d288f3-kube-api-access-lrvb4\") pod \"packageserver-d55dfcdfc-5bptf\" (UID: \"2ec75c03-cdcf-4d6e-97cd-453ea4d288f3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5bptf" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.261764 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3ad64404-53a2-45aa-8eba-292f14135379-proxy-tls\") pod \"machine-config-operator-74547568cd-6xjzj\" (UID: \"3ad64404-53a2-45aa-8eba-292f14135379\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6xjzj" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.261880 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/294ae02f-293d-4db8-9a9f-6d3878c8ccf9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ndd4n\" (UID: \"294ae02f-293d-4db8-9a9f-6d3878c8ccf9\") " pod="openshift-marketplace/marketplace-operator-79b997595-ndd4n" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.261954 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/294ae02f-293d-4db8-9a9f-6d3878c8ccf9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ndd4n\" (UID: \"294ae02f-293d-4db8-9a9f-6d3878c8ccf9\") " pod="openshift-marketplace/marketplace-operator-79b997595-ndd4n" Jan 27 13:09:18 crc kubenswrapper[4786]: 
I0127 13:09:18.262025 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33df9034-8bca-4bf5-ac81-73b8b14ca319-config-volume\") pod \"dns-default-ck8s7\" (UID: \"33df9034-8bca-4bf5-ac81-73b8b14ca319\") " pod="openshift-dns/dns-default-ck8s7" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.262148 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05c9c8a7-f1d1-4d38-b6e3-9079942aaf40-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sllw5\" (UID: \"05c9c8a7-f1d1-4d38-b6e3-9079942aaf40\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sllw5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.262151 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e183bd5d-f0d0-4254-82d5-240578ae6d1a-registration-dir\") pod \"csi-hostpathplugin-qvpfk\" (UID: \"e183bd5d-f0d0-4254-82d5-240578ae6d1a\") " pod="hostpath-provisioner/csi-hostpathplugin-qvpfk" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.262216 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ddd8d0e9-9f75-4f78-9d96-9373fdaa9c08-stats-auth\") pod \"router-default-5444994796-whlx4\" (UID: \"ddd8d0e9-9f75-4f78-9d96-9373fdaa9c08\") " pod="openshift-ingress/router-default-5444994796-whlx4" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.262223 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/fafe1848-6215-4056-92a7-d8032eca6e26-node-bootstrap-token\") pod \"machine-config-server-shs56\" (UID: \"fafe1848-6215-4056-92a7-d8032eca6e26\") " pod="openshift-machine-config-operator/machine-config-server-shs56" Jan 27 13:09:18 crc 
kubenswrapper[4786]: I0127 13:09:18.262269 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l58p4\" (UniqueName: \"kubernetes.io/projected/4872c606-0857-4266-8cd8-a78c8070b5ca-kube-api-access-l58p4\") pod \"multus-admission-controller-857f4d67dd-knblz\" (UID: \"4872c606-0857-4266-8cd8-a78c8070b5ca\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-knblz" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.262290 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/486083ab-c15d-43da-9eb1-c9736ed02e4e-cert\") pod \"ingress-canary-pq9ft\" (UID: \"486083ab-c15d-43da-9eb1-c9736ed02e4e\") " pod="openshift-ingress-canary/ingress-canary-pq9ft" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.262307 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69af92e-0d80-428e-930c-fbd86b17643c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-d8n2x\" (UID: \"e69af92e-0d80-428e-930c-fbd86b17643c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d8n2x" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.262325 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b44d5db3-8518-4039-9d50-6d9d991e78bc-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-h2lh7\" (UID: \"b44d5db3-8518-4039-9d50-6d9d991e78bc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h2lh7" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.262345 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6xvr\" (UniqueName: \"kubernetes.io/projected/a0cb545f-a56f-4948-b56d-956edd2501db-kube-api-access-q6xvr\") pod 
\"package-server-manager-789f6589d5-2j2wh\" (UID: \"a0cb545f-a56f-4948-b56d-956edd2501db\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2j2wh" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.262363 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b44d5db3-8518-4039-9d50-6d9d991e78bc-config\") pod \"kube-controller-manager-operator-78b949d7b-h2lh7\" (UID: \"b44d5db3-8518-4039-9d50-6d9d991e78bc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h2lh7" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.262381 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3cf531c6-d1a1-4f65-af72-093ffdb034c1-config-volume\") pod \"collect-profiles-29491980-6pf7c\" (UID: \"3cf531c6-d1a1-4f65-af72-093ffdb034c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491980-6pf7c" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.262405 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2ec75c03-cdcf-4d6e-97cd-453ea4d288f3-tmpfs\") pod \"packageserver-d55dfcdfc-5bptf\" (UID: \"2ec75c03-cdcf-4d6e-97cd-453ea4d288f3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5bptf" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.262439 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/952ee042-cdc6-44b7-8aa0-b24f7e1e1027-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-4c8h9\" (UID: \"952ee042-cdc6-44b7-8aa0-b24f7e1e1027\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4c8h9" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.262455 4786 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c35b6e44-42c4-4322-a9cd-14a087926529-serving-cert\") pod \"service-ca-operator-777779d784-mtzzc\" (UID: \"c35b6e44-42c4-4322-a9cd-14a087926529\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mtzzc" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.262516 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e69af92e-0d80-428e-930c-fbd86b17643c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-d8n2x\" (UID: \"e69af92e-0d80-428e-930c-fbd86b17643c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d8n2x" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.262544 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/952ee042-cdc6-44b7-8aa0-b24f7e1e1027-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-4c8h9\" (UID: \"952ee042-cdc6-44b7-8aa0-b24f7e1e1027\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4c8h9" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.262591 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bdde3a3a-6d8a-45b8-b718-378bca43559e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-lhhfq\" (UID: \"bdde3a3a-6d8a-45b8-b718-378bca43559e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lhhfq" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.262656 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln9hn\" (UniqueName: \"kubernetes.io/projected/3ad64404-53a2-45aa-8eba-292f14135379-kube-api-access-ln9hn\") pod \"machine-config-operator-74547568cd-6xjzj\" (UID: 
\"3ad64404-53a2-45aa-8eba-292f14135379\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6xjzj" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.262679 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05c9c8a7-f1d1-4d38-b6e3-9079942aaf40-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sllw5\" (UID: \"05c9c8a7-f1d1-4d38-b6e3-9079942aaf40\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sllw5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.262696 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2zxs\" (UniqueName: \"kubernetes.io/projected/e69af92e-0d80-428e-930c-fbd86b17643c-kube-api-access-x2zxs\") pod \"kube-storage-version-migrator-operator-b67b599dd-d8n2x\" (UID: \"e69af92e-0d80-428e-930c-fbd86b17643c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d8n2x" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.262734 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e183bd5d-f0d0-4254-82d5-240578ae6d1a-socket-dir\") pod \"csi-hostpathplugin-qvpfk\" (UID: \"e183bd5d-f0d0-4254-82d5-240578ae6d1a\") " pod="hostpath-provisioner/csi-hostpathplugin-qvpfk" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.262761 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj7nl\" (UniqueName: \"kubernetes.io/projected/9472f740-2de2-4b00-b109-0b17604c4c9e-kube-api-access-dj7nl\") pod \"olm-operator-6b444d44fb-2wgf7\" (UID: \"9472f740-2de2-4b00-b109-0b17604c4c9e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2wgf7" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.262823 4786 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.262843 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9472f740-2de2-4b00-b109-0b17604c4c9e-srv-cert\") pod \"olm-operator-6b444d44fb-2wgf7\" (UID: \"9472f740-2de2-4b00-b109-0b17604c4c9e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2wgf7" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.262872 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc6nm\" (UniqueName: \"kubernetes.io/projected/80f22eb7-03e4-4217-939a-1600b43c788a-kube-api-access-sc6nm\") pod \"migrator-59844c95c7-vrj6k\" (UID: \"80f22eb7-03e4-4217-939a-1600b43c788a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vrj6k" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.262889 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f90e2034-b734-41d4-b7e2-856605c95aff-profile-collector-cert\") pod \"catalog-operator-68c6474976-lw2xw\" (UID: \"f90e2034-b734-41d4-b7e2-856605c95aff\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lw2xw" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.262907 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-br2d4\" (UniqueName: \"kubernetes.io/projected/ddd8d0e9-9f75-4f78-9d96-9373fdaa9c08-kube-api-access-br2d4\") pod \"router-default-5444994796-whlx4\" (UID: \"ddd8d0e9-9f75-4f78-9d96-9373fdaa9c08\") " 
pod="openshift-ingress/router-default-5444994796-whlx4" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.262919 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f90e2034-b734-41d4-b7e2-856605c95aff-srv-cert\") pod \"catalog-operator-68c6474976-lw2xw\" (UID: \"f90e2034-b734-41d4-b7e2-856605c95aff\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lw2xw" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.262924 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ddd8d0e9-9f75-4f78-9d96-9373fdaa9c08-default-certificate\") pod \"router-default-5444994796-whlx4\" (UID: \"ddd8d0e9-9f75-4f78-9d96-9373fdaa9c08\") " pod="openshift-ingress/router-default-5444994796-whlx4" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.263150 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e183bd5d-f0d0-4254-82d5-240578ae6d1a-registration-dir\") pod \"csi-hostpathplugin-qvpfk\" (UID: \"e183bd5d-f0d0-4254-82d5-240578ae6d1a\") " pod="hostpath-provisioner/csi-hostpathplugin-qvpfk" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.263275 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e183bd5d-f0d0-4254-82d5-240578ae6d1a-csi-data-dir\") pod \"csi-hostpathplugin-qvpfk\" (UID: \"e183bd5d-f0d0-4254-82d5-240578ae6d1a\") " pod="hostpath-provisioner/csi-hostpathplugin-qvpfk" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.263441 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fa7fb69c-753a-481b-891d-661a2b6b37fd-etcd-ca\") pod \"etcd-operator-b45778765-lr6n6\" (UID: \"fa7fb69c-753a-481b-891d-661a2b6b37fd\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-lr6n6" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.263973 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/952ee042-cdc6-44b7-8aa0-b24f7e1e1027-config\") pod \"kube-apiserver-operator-766d6c64bb-4c8h9\" (UID: \"952ee042-cdc6-44b7-8aa0-b24f7e1e1027\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4c8h9" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.263989 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a0cb545f-a56f-4948-b56d-956edd2501db-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-2j2wh\" (UID: \"a0cb545f-a56f-4948-b56d-956edd2501db\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2j2wh" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.264091 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4ac1751a-b5c6-46c7-a771-200f41805eea-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-47c2z\" (UID: \"4ac1751a-b5c6-46c7-a771-200f41805eea\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-47c2z" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.261042 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2ec75c03-cdcf-4d6e-97cd-453ea4d288f3-webhook-cert\") pod \"packageserver-d55dfcdfc-5bptf\" (UID: \"2ec75c03-cdcf-4d6e-97cd-453ea4d288f3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5bptf" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.264197 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/ddd8d0e9-9f75-4f78-9d96-9373fdaa9c08-service-ca-bundle\") pod \"router-default-5444994796-whlx4\" (UID: \"ddd8d0e9-9f75-4f78-9d96-9373fdaa9c08\") " pod="openshift-ingress/router-default-5444994796-whlx4" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.264491 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b44d5db3-8518-4039-9d50-6d9d991e78bc-config\") pod \"kube-controller-manager-operator-78b949d7b-h2lh7\" (UID: \"b44d5db3-8518-4039-9d50-6d9d991e78bc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h2lh7" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.264744 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3cf531c6-d1a1-4f65-af72-093ffdb034c1-secret-volume\") pod \"collect-profiles-29491980-6pf7c\" (UID: \"3cf531c6-d1a1-4f65-af72-093ffdb034c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491980-6pf7c" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.266092 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bdde3a3a-6d8a-45b8-b718-378bca43559e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-lhhfq\" (UID: \"bdde3a3a-6d8a-45b8-b718-378bca43559e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lhhfq" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.266340 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e183bd5d-f0d0-4254-82d5-240578ae6d1a-socket-dir\") pod \"csi-hostpathplugin-qvpfk\" (UID: \"e183bd5d-f0d0-4254-82d5-240578ae6d1a\") " pod="hostpath-provisioner/csi-hostpathplugin-qvpfk" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.256393 4786 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fa7fb69c-753a-481b-891d-661a2b6b37fd-etcd-service-ca\") pod \"etcd-operator-b45778765-lr6n6\" (UID: \"fa7fb69c-753a-481b-891d-661a2b6b37fd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lr6n6" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.258361 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2ec75c03-cdcf-4d6e-97cd-453ea4d288f3-apiservice-cert\") pod \"packageserver-d55dfcdfc-5bptf\" (UID: \"2ec75c03-cdcf-4d6e-97cd-453ea4d288f3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5bptf" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.266446 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b44d5db3-8518-4039-9d50-6d9d991e78bc-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-h2lh7\" (UID: \"b44d5db3-8518-4039-9d50-6d9d991e78bc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h2lh7" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.261379 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3ad64404-53a2-45aa-8eba-292f14135379-images\") pod \"machine-config-operator-74547568cd-6xjzj\" (UID: \"3ad64404-53a2-45aa-8eba-292f14135379\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6xjzj" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.266947 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/33df9034-8bca-4bf5-ac81-73b8b14ca319-metrics-tls\") pod \"dns-default-ck8s7\" (UID: \"33df9034-8bca-4bf5-ac81-73b8b14ca319\") " pod="openshift-dns/dns-default-ck8s7" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.266970 4786 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ddd8d0e9-9f75-4f78-9d96-9373fdaa9c08-default-certificate\") pod \"router-default-5444994796-whlx4\" (UID: \"ddd8d0e9-9f75-4f78-9d96-9373fdaa9c08\") " pod="openshift-ingress/router-default-5444994796-whlx4" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.266967 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4872c606-0857-4266-8cd8-a78c8070b5ca-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-knblz\" (UID: \"4872c606-0857-4266-8cd8-a78c8070b5ca\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-knblz" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.256818 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa7fb69c-753a-481b-891d-661a2b6b37fd-config\") pod \"etcd-operator-b45778765-lr6n6\" (UID: \"fa7fb69c-753a-481b-891d-661a2b6b37fd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lr6n6" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.267782 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/294ae02f-293d-4db8-9a9f-6d3878c8ccf9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ndd4n\" (UID: \"294ae02f-293d-4db8-9a9f-6d3878c8ccf9\") " pod="openshift-marketplace/marketplace-operator-79b997595-ndd4n" Jan 27 13:09:18 crc kubenswrapper[4786]: E0127 13:09:18.267848 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:18.767830139 +0000 UTC m=+141.978444248 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.268278 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa7fb69c-753a-481b-891d-661a2b6b37fd-serving-cert\") pod \"etcd-operator-b45778765-lr6n6\" (UID: \"fa7fb69c-753a-481b-891d-661a2b6b37fd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lr6n6" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.255940 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3ad64404-53a2-45aa-8eba-292f14135379-auth-proxy-config\") pod \"machine-config-operator-74547568cd-6xjzj\" (UID: \"3ad64404-53a2-45aa-8eba-292f14135379\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6xjzj" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.259903 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e183bd5d-f0d0-4254-82d5-240578ae6d1a-mountpoint-dir\") pod \"csi-hostpathplugin-qvpfk\" (UID: \"e183bd5d-f0d0-4254-82d5-240578ae6d1a\") " pod="hostpath-provisioner/csi-hostpathplugin-qvpfk" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.260588 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9b6d90be-b987-4cac-8e4c-7c1afa1abffe-signing-cabundle\") pod \"service-ca-9c57cc56f-8vdpv\" (UID: 
\"9b6d90be-b987-4cac-8e4c-7c1afa1abffe\") " pod="openshift-service-ca/service-ca-9c57cc56f-8vdpv" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.259994 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e183bd5d-f0d0-4254-82d5-240578ae6d1a-plugins-dir\") pod \"csi-hostpathplugin-qvpfk\" (UID: \"e183bd5d-f0d0-4254-82d5-240578ae6d1a\") " pod="hostpath-provisioner/csi-hostpathplugin-qvpfk" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.268576 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33df9034-8bca-4bf5-ac81-73b8b14ca319-config-volume\") pod \"dns-default-ck8s7\" (UID: \"33df9034-8bca-4bf5-ac81-73b8b14ca319\") " pod="openshift-dns/dns-default-ck8s7" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.260585 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9b6d90be-b987-4cac-8e4c-7c1afa1abffe-signing-key\") pod \"service-ca-9c57cc56f-8vdpv\" (UID: \"9b6d90be-b987-4cac-8e4c-7c1afa1abffe\") " pod="openshift-service-ca/service-ca-9c57cc56f-8vdpv" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.269012 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/021c92f0-97e6-41aa-9ff3-b14e81d3431a-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-wllp5\" (UID: \"021c92f0-97e6-41aa-9ff3-b14e81d3431a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wllp5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.270675 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05c9c8a7-f1d1-4d38-b6e3-9079942aaf40-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sllw5\" (UID: 
\"05c9c8a7-f1d1-4d38-b6e3-9079942aaf40\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sllw5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.270719 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/952ee042-cdc6-44b7-8aa0-b24f7e1e1027-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-4c8h9\" (UID: \"952ee042-cdc6-44b7-8aa0-b24f7e1e1027\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4c8h9" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.271001 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9472f740-2de2-4b00-b109-0b17604c4c9e-srv-cert\") pod \"olm-operator-6b444d44fb-2wgf7\" (UID: \"9472f740-2de2-4b00-b109-0b17604c4c9e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2wgf7" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.272093 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9472f740-2de2-4b00-b109-0b17604c4c9e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-2wgf7\" (UID: \"9472f740-2de2-4b00-b109-0b17604c4c9e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2wgf7" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.273511 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3cf531c6-d1a1-4f65-af72-093ffdb034c1-config-volume\") pod \"collect-profiles-29491980-6pf7c\" (UID: \"3cf531c6-d1a1-4f65-af72-093ffdb034c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491980-6pf7c" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.273930 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/2ec75c03-cdcf-4d6e-97cd-453ea4d288f3-tmpfs\") pod \"packageserver-d55dfcdfc-5bptf\" (UID: \"2ec75c03-cdcf-4d6e-97cd-453ea4d288f3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5bptf" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.273998 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/021c92f0-97e6-41aa-9ff3-b14e81d3431a-proxy-tls\") pod \"machine-config-controller-84d6567774-wllp5\" (UID: \"021c92f0-97e6-41aa-9ff3-b14e81d3431a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wllp5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.274268 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/294ae02f-293d-4db8-9a9f-6d3878c8ccf9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ndd4n\" (UID: \"294ae02f-293d-4db8-9a9f-6d3878c8ccf9\") " pod="openshift-marketplace/marketplace-operator-79b997595-ndd4n" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.274563 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c35b6e44-42c4-4322-a9cd-14a087926529-config\") pod \"service-ca-operator-777779d784-mtzzc\" (UID: \"c35b6e44-42c4-4322-a9cd-14a087926529\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mtzzc" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.275200 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ddd8d0e9-9f75-4f78-9d96-9373fdaa9c08-metrics-certs\") pod \"router-default-5444994796-whlx4\" (UID: \"ddd8d0e9-9f75-4f78-9d96-9373fdaa9c08\") " pod="openshift-ingress/router-default-5444994796-whlx4" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.275501 4786 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69af92e-0d80-428e-930c-fbd86b17643c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-d8n2x\" (UID: \"e69af92e-0d80-428e-930c-fbd86b17643c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d8n2x" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.275585 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e69af92e-0d80-428e-930c-fbd86b17643c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-d8n2x\" (UID: \"e69af92e-0d80-428e-930c-fbd86b17643c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d8n2x" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.275699 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c35b6e44-42c4-4322-a9cd-14a087926529-serving-cert\") pod \"service-ca-operator-777779d784-mtzzc\" (UID: \"c35b6e44-42c4-4322-a9cd-14a087926529\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mtzzc" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.275508 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3ad64404-53a2-45aa-8eba-292f14135379-proxy-tls\") pod \"machine-config-operator-74547568cd-6xjzj\" (UID: \"3ad64404-53a2-45aa-8eba-292f14135379\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6xjzj" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.275992 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bdde3a3a-6d8a-45b8-b718-378bca43559e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-lhhfq\" (UID: 
\"bdde3a3a-6d8a-45b8-b718-378bca43559e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lhhfq" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.276361 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/fafe1848-6215-4056-92a7-d8032eca6e26-node-bootstrap-token\") pod \"machine-config-server-shs56\" (UID: \"fafe1848-6215-4056-92a7-d8032eca6e26\") " pod="openshift-machine-config-operator/machine-config-server-shs56" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.276521 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/fafe1848-6215-4056-92a7-d8032eca6e26-certs\") pod \"machine-config-server-shs56\" (UID: \"fafe1848-6215-4056-92a7-d8032eca6e26\") " pod="openshift-machine-config-operator/machine-config-server-shs56" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.277735 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjfrl\" (UniqueName: \"kubernetes.io/projected/47f5a0b2-7757-4795-901e-d175d64ebe67-kube-api-access-zjfrl\") pod \"console-f9d7485db-vwjp5\" (UID: \"47f5a0b2-7757-4795-901e-d175d64ebe67\") " pod="openshift-console/console-f9d7485db-vwjp5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.278195 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f90e2034-b734-41d4-b7e2-856605c95aff-profile-collector-cert\") pod \"catalog-operator-68c6474976-lw2xw\" (UID: \"f90e2034-b734-41d4-b7e2-856605c95aff\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lw2xw" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.279912 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fa7fb69c-753a-481b-891d-661a2b6b37fd-etcd-client\") pod 
\"etcd-operator-b45778765-lr6n6\" (UID: \"fa7fb69c-753a-481b-891d-661a2b6b37fd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lr6n6" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.282513 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/486083ab-c15d-43da-9eb1-c9736ed02e4e-cert\") pod \"ingress-canary-pq9ft\" (UID: \"486083ab-c15d-43da-9eb1-c9736ed02e4e\") " pod="openshift-ingress-canary/ingress-canary-pq9ft" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.286917 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-8nm76" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.293469 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6adde762-4e97-44eb-a96c-14a79ec7998a-bound-sa-token\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.314480 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trlrz\" (UniqueName: \"kubernetes.io/projected/caeaacdd-b085-4196-8fcc-f10ba2b593f7-kube-api-access-trlrz\") pod \"dns-operator-744455d44c-ddxlt\" (UID: \"caeaacdd-b085-4196-8fcc-f10ba2b593f7\") " pod="openshift-dns-operator/dns-operator-744455d44c-ddxlt" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.335528 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd9c7\" (UniqueName: \"kubernetes.io/projected/f1bfd5a9-8223-4fb1-b3e3-1acab59d886e-kube-api-access-nd9c7\") pod \"ingress-operator-5b745b69d9-qmkrf\" (UID: \"f1bfd5a9-8223-4fb1-b3e3-1acab59d886e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qmkrf" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.355503 4786 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt2km\" (UniqueName: \"kubernetes.io/projected/d3f971e8-68fd-40a7-902a-ba8c6110f14d-kube-api-access-nt2km\") pod \"openshift-config-operator-7777fb866f-7l2ns\" (UID: \"d3f971e8-68fd-40a7-902a-ba8c6110f14d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7l2ns" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.357459 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-pgkx8" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.363818 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:18 crc kubenswrapper[4786]: E0127 13:09:18.363962 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:18.863931964 +0000 UTC m=+142.074546083 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.364117 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:18 crc kubenswrapper[4786]: E0127 13:09:18.364658 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:18.864645835 +0000 UTC m=+142.075260054 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.374884 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f1bfd5a9-8223-4fb1-b3e3-1acab59d886e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-qmkrf\" (UID: \"f1bfd5a9-8223-4fb1-b3e3-1acab59d886e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qmkrf" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.380990 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wtdrs" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.395409 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9l4wd" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.396586 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5jpn\" (UniqueName: \"kubernetes.io/projected/975cb02b-51f0-4d7b-a59c-b25126c0c1c2-kube-api-access-l5jpn\") pod \"route-controller-manager-6576b87f9c-72vpf\" (UID: \"975cb02b-51f0-4d7b-a59c-b25126c0c1c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-72vpf" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.416848 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p2p8\" (UniqueName: \"kubernetes.io/projected/54c0b843-e588-4296-a0ab-8272fa1b23e5-kube-api-access-4p2p8\") pod \"authentication-operator-69f744f599-wv4hn\" (UID: \"54c0b843-e588-4296-a0ab-8272fa1b23e5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wv4hn" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.427711 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-72vpf" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.440525 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-wv4hn" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.441401 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvmwm\" (UniqueName: \"kubernetes.io/projected/8156e329-ca23-4079-8b23-ba0c32cc89a9-kube-api-access-hvmwm\") pod \"oauth-openshift-558db77b4-nsc4d\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.459122 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znbz6\" (UniqueName: \"kubernetes.io/projected/bb619dcb-2b7a-413f-b136-48e4eec0eb9f-kube-api-access-znbz6\") pod \"cluster-samples-operator-665b6dd947-vszvp\" (UID: \"bb619dcb-2b7a-413f-b136-48e4eec0eb9f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vszvp" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.465019 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:18 crc kubenswrapper[4786]: E0127 13:09:18.465450 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:18.965435808 +0000 UTC m=+142.176049927 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.479243 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9vft\" (UniqueName: \"kubernetes.io/projected/6adde762-4e97-44eb-a96c-14a79ec7998a-kube-api-access-w9vft\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.483128 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-8nm76"] Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.496411 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzjfm\" (UniqueName: \"kubernetes.io/projected/3ef0407d-860c-4e18-8dea-9373887d7b88-kube-api-access-bzjfm\") pod \"apiserver-76f77b778f-j6ww5\" (UID: \"3ef0407d-860c-4e18-8dea-9373887d7b88\") " pod="openshift-apiserver/apiserver-76f77b778f-j6ww5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.521837 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.522338 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7l2ns" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.524512 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfgsl\" (UniqueName: \"kubernetes.io/projected/7275decb-852d-401b-81b7-affb84126aad-kube-api-access-zfgsl\") pod \"console-operator-58897d9998-qzpj9\" (UID: \"7275decb-852d-401b-81b7-affb84126aad\") " pod="openshift-console-operator/console-operator-58897d9998-qzpj9" Jan 27 13:09:18 crc kubenswrapper[4786]: W0127 13:09:18.525053 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbf5f627_0aa5_4a32_840c_f76373e2150e.slice/crio-db093d5f33c38543a36b70a0bd577d92e08b3113c1596a44a19f56644f16f6da WatchSource:0}: Error finding container db093d5f33c38543a36b70a0bd577d92e08b3113c1596a44a19f56644f16f6da: Status 404 returned error can't find the container with id db093d5f33c38543a36b70a0bd577d92e08b3113c1596a44a19f56644f16f6da Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.531579 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vszvp" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.537530 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfh6f\" (UniqueName: \"kubernetes.io/projected/6cba5a02-cf5f-4aaa-9c5c-c5979789f000-kube-api-access-xfh6f\") pod \"machine-approver-56656f9798-k47sw\" (UID: \"6cba5a02-cf5f-4aaa-9c5c-c5979789f000\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k47sw" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.543561 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k47sw" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.552753 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfjfz\" (UniqueName: \"kubernetes.io/projected/3d6bb6cb-129c-4f02-8afc-4c73ebaefdb2-kube-api-access-sfjfz\") pod \"openshift-apiserver-operator-796bbdcf4f-8gmql\" (UID: \"3d6bb6cb-129c-4f02-8afc-4c73ebaefdb2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8gmql" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.563739 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-pgkx8"] Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.569192 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:18 crc kubenswrapper[4786]: E0127 13:09:18.569473 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:19.069460889 +0000 UTC m=+142.280075008 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.570054 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-vwjp5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.573364 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx5sw\" (UniqueName: \"kubernetes.io/projected/5922ee53-d413-4676-ab1e-21f570893009-kube-api-access-kx5sw\") pod \"openshift-controller-manager-operator-756b6f6bc6-gc82m\" (UID: \"5922ee53-d413-4676-ab1e-21f570893009\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc82m" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.596078 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wtdrs"] Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.597865 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-ddxlt" Jan 27 13:09:18 crc kubenswrapper[4786]: W0127 13:09:18.606745 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6cba5a02_cf5f_4aaa_9c5c_c5979789f000.slice/crio-0a6c20223f11b62a5bf14777d352e963e978fc1c55b712556836c02cd7df8d5a WatchSource:0}: Error finding container 0a6c20223f11b62a5bf14777d352e963e978fc1c55b712556836c02cd7df8d5a: Status 404 returned error can't find the container with id 0a6c20223f11b62a5bf14777d352e963e978fc1c55b712556836c02cd7df8d5a Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.606909 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qmkrf" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.615270 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc82m" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.616212 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgp8p\" (UniqueName: \"kubernetes.io/projected/f90e2034-b734-41d4-b7e2-856605c95aff-kube-api-access-cgp8p\") pod \"catalog-operator-68c6474976-lw2xw\" (UID: \"f90e2034-b734-41d4-b7e2-856605c95aff\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lw2xw" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.640293 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-j6ww5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.641824 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4pwp\" (UniqueName: \"kubernetes.io/projected/33df9034-8bca-4bf5-ac81-73b8b14ca319-kube-api-access-j4pwp\") pod \"dns-default-ck8s7\" (UID: \"33df9034-8bca-4bf5-ac81-73b8b14ca319\") " pod="openshift-dns/dns-default-ck8s7" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.661621 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbb6v\" (UniqueName: \"kubernetes.io/projected/4ac1751a-b5c6-46c7-a771-200f41805eea-kube-api-access-zbb6v\") pod \"control-plane-machine-set-operator-78cbb6b69f-47c2z\" (UID: \"4ac1751a-b5c6-46c7-a771-200f41805eea\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-47c2z" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.671879 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9l4wd"] Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.674106 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:18 crc kubenswrapper[4786]: E0127 13:09:18.674300 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:19.174281992 +0000 UTC m=+142.384896111 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.674625 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:18 crc kubenswrapper[4786]: E0127 13:09:18.675023 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:19.175008245 +0000 UTC m=+142.385622364 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.677755 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl77z\" (UniqueName: \"kubernetes.io/projected/486083ab-c15d-43da-9eb1-c9736ed02e4e-kube-api-access-hl77z\") pod \"ingress-canary-pq9ft\" (UID: \"486083ab-c15d-43da-9eb1-c9736ed02e4e\") " pod="openshift-ingress-canary/ingress-canary-pq9ft" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.707143 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lw2xw" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.707271 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pclzk\" (UniqueName: \"kubernetes.io/projected/e183bd5d-f0d0-4254-82d5-240578ae6d1a-kube-api-access-pclzk\") pod \"csi-hostpathplugin-qvpfk\" (UID: \"e183bd5d-f0d0-4254-82d5-240578ae6d1a\") " pod="hostpath-provisioner/csi-hostpathplugin-qvpfk" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.707925 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-72vpf"] Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.710273 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-qzpj9" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.720302 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk4bz\" (UniqueName: \"kubernetes.io/projected/294ae02f-293d-4db8-9a9f-6d3878c8ccf9-kube-api-access-rk4bz\") pod \"marketplace-operator-79b997595-ndd4n\" (UID: \"294ae02f-293d-4db8-9a9f-6d3878c8ccf9\") " pod="openshift-marketplace/marketplace-operator-79b997595-ndd4n" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.737546 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jxmx\" (UniqueName: \"kubernetes.io/projected/bdde3a3a-6d8a-45b8-b718-378bca43559e-kube-api-access-4jxmx\") pod \"cluster-image-registry-operator-dc59b4c8b-lhhfq\" (UID: \"bdde3a3a-6d8a-45b8-b718-378bca43559e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lhhfq" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.745801 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-47c2z" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.748582 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-wv4hn"] Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.752056 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ndd4n" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.760412 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bdde3a3a-6d8a-45b8-b718-378bca43559e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-lhhfq\" (UID: \"bdde3a3a-6d8a-45b8-b718-378bca43559e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lhhfq" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.778862 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.779949 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8gmql" Jan 27 13:09:18 crc kubenswrapper[4786]: E0127 13:09:18.780681 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:19.280662303 +0000 UTC m=+142.491276422 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.797227 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-nsc4d"] Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.800555 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtgdd\" (UniqueName: \"kubernetes.io/projected/3cf531c6-d1a1-4f65-af72-093ffdb034c1-kube-api-access-qtgdd\") pod \"collect-profiles-29491980-6pf7c\" (UID: \"3cf531c6-d1a1-4f65-af72-093ffdb034c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491980-6pf7c" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.800674 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgkj8\" (UniqueName: \"kubernetes.io/projected/9b6d90be-b987-4cac-8e4c-7c1afa1abffe-kube-api-access-sgkj8\") pod \"service-ca-9c57cc56f-8vdpv\" (UID: \"9b6d90be-b987-4cac-8e4c-7c1afa1abffe\") " pod="openshift-service-ca/service-ca-9c57cc56f-8vdpv" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.807847 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-qvpfk" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.814440 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pq9ft" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.819949 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-ck8s7" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.829766 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp65l\" (UniqueName: \"kubernetes.io/projected/021c92f0-97e6-41aa-9ff3-b14e81d3431a-kube-api-access-sp65l\") pod \"machine-config-controller-84d6567774-wllp5\" (UID: \"021c92f0-97e6-41aa-9ff3-b14e81d3431a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wllp5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.842291 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsqvc\" (UniqueName: \"kubernetes.io/projected/c35b6e44-42c4-4322-a9cd-14a087926529-kube-api-access-tsqvc\") pod \"service-ca-operator-777779d784-mtzzc\" (UID: \"c35b6e44-42c4-4322-a9cd-14a087926529\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mtzzc" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.854078 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7l2ns"] Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.858826 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l58p4\" (UniqueName: \"kubernetes.io/projected/4872c606-0857-4266-8cd8-a78c8070b5ca-kube-api-access-l58p4\") pod \"multus-admission-controller-857f4d67dd-knblz\" (UID: \"4872c606-0857-4266-8cd8-a78c8070b5ca\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-knblz" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.882396 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:18 crc kubenswrapper[4786]: E0127 13:09:18.883013 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:19.382983393 +0000 UTC m=+142.593597582 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.887217 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj7nl\" (UniqueName: \"kubernetes.io/projected/9472f740-2de2-4b00-b109-0b17604c4c9e-kube-api-access-dj7nl\") pod \"olm-operator-6b444d44fb-2wgf7\" (UID: \"9472f740-2de2-4b00-b109-0b17604c4c9e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2wgf7" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.904989 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln9hn\" (UniqueName: \"kubernetes.io/projected/3ad64404-53a2-45aa-8eba-292f14135379-kube-api-access-ln9hn\") pod \"machine-config-operator-74547568cd-6xjzj\" (UID: \"3ad64404-53a2-45aa-8eba-292f14135379\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6xjzj" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.920697 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05c9c8a7-f1d1-4d38-b6e3-9079942aaf40-kube-api-access\") 
pod \"openshift-kube-scheduler-operator-5fdd9b5758-sllw5\" (UID: \"05c9c8a7-f1d1-4d38-b6e3-9079942aaf40\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sllw5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.925311 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sllw5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.948542 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2zxs\" (UniqueName: \"kubernetes.io/projected/e69af92e-0d80-428e-930c-fbd86b17643c-kube-api-access-x2zxs\") pod \"kube-storage-version-migrator-operator-b67b599dd-d8n2x\" (UID: \"e69af92e-0d80-428e-930c-fbd86b17643c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d8n2x" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.961099 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrvb4\" (UniqueName: \"kubernetes.io/projected/2ec75c03-cdcf-4d6e-97cd-453ea4d288f3-kube-api-access-lrvb4\") pod \"packageserver-d55dfcdfc-5bptf\" (UID: \"2ec75c03-cdcf-4d6e-97cd-453ea4d288f3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5bptf" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.966695 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wllp5" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.975765 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b44d5db3-8518-4039-9d50-6d9d991e78bc-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-h2lh7\" (UID: \"b44d5db3-8518-4039-9d50-6d9d991e78bc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h2lh7" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.982918 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d8n2x" Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.983508 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:18 crc kubenswrapper[4786]: E0127 13:09:18.983917 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:19.48390257 +0000 UTC m=+142.694516679 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:18 crc kubenswrapper[4786]: I0127 13:09:18.991533 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2wgf7" Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:18.998801 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkcwv\" (UniqueName: \"kubernetes.io/projected/fafe1848-6215-4056-92a7-d8032eca6e26-kube-api-access-zkcwv\") pod \"machine-config-server-shs56\" (UID: \"fafe1848-6215-4056-92a7-d8032eca6e26\") " pod="openshift-machine-config-operator/machine-config-server-shs56" Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:18.999045 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-knblz" Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.017152 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6xjzj" Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.017333 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/952ee042-cdc6-44b7-8aa0-b24f7e1e1027-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-4c8h9\" (UID: \"952ee042-cdc6-44b7-8aa0-b24f7e1e1027\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4c8h9" Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.030844 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lhhfq" Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.036691 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tdr7\" (UniqueName: \"kubernetes.io/projected/fa7fb69c-753a-481b-891d-661a2b6b37fd-kube-api-access-9tdr7\") pod \"etcd-operator-b45778765-lr6n6\" (UID: \"fa7fb69c-753a-481b-891d-661a2b6b37fd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lr6n6" Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.046753 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5bptf" Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.057583 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-br2d4\" (UniqueName: \"kubernetes.io/projected/ddd8d0e9-9f75-4f78-9d96-9373fdaa9c08-kube-api-access-br2d4\") pod \"router-default-5444994796-whlx4\" (UID: \"ddd8d0e9-9f75-4f78-9d96-9373fdaa9c08\") " pod="openshift-ingress/router-default-5444994796-whlx4" Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.058625 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-8vdpv" Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.070889 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mtzzc" Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.079423 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc6nm\" (UniqueName: \"kubernetes.io/projected/80f22eb7-03e4-4217-939a-1600b43c788a-kube-api-access-sc6nm\") pod \"migrator-59844c95c7-vrj6k\" (UID: \"80f22eb7-03e4-4217-939a-1600b43c788a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vrj6k" Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.080271 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491980-6pf7c" Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.084930 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:19 crc kubenswrapper[4786]: E0127 13:09:19.085304 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:19.585291962 +0000 UTC m=+142.795906081 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.085943 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-shs56" Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.098510 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6xvr\" (UniqueName: \"kubernetes.io/projected/a0cb545f-a56f-4948-b56d-956edd2501db-kube-api-access-q6xvr\") pod \"package-server-manager-789f6589d5-2j2wh\" (UID: \"a0cb545f-a56f-4948-b56d-956edd2501db\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2j2wh" Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.189415 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9l4wd" event={"ID":"a6917b6e-005e-44cf-92c4-6fc271f5ce49","Type":"ContainerStarted","Data":"f63930f6bd5464388359e104c18fb60d0dcc5113fbccb714d490f2b48a1e5ec2"} Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.189747 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:19 crc kubenswrapper[4786]: E0127 13:09:19.190061 4786 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:19.689801006 +0000 UTC m=+142.900415125 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.190215 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:19 crc kubenswrapper[4786]: E0127 13:09:19.190814 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:19.690804477 +0000 UTC m=+142.901418596 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.203341 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vszvp"] Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.205685 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-wv4hn" event={"ID":"54c0b843-e588-4296-a0ab-8272fa1b23e5","Type":"ContainerStarted","Data":"04caabb47f7dfb72e4de3e3b35c0a34c772b0ec3bf51b873efd5c0f883f89ca6"} Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.206941 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-ddxlt"] Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.212912 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-72vpf" event={"ID":"975cb02b-51f0-4d7b-a59c-b25126c0c1c2","Type":"ContainerStarted","Data":"ee201c2b3040d2acc017a776a90d3656fd6177971cc33c2323cab9cb6f10a000"} Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.220142 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-8nm76" event={"ID":"cbf5f627-0aa5-4a32-840c-f76373e2150e","Type":"ContainerStarted","Data":"b2976e81cc75285b8d37f9d89fca011c6c0f08bdc2b8867bf00e48696e67a420"} Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.220184 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/downloads-7954f5f757-8nm76" event={"ID":"cbf5f627-0aa5-4a32-840c-f76373e2150e","Type":"ContainerStarted","Data":"db093d5f33c38543a36b70a0bd577d92e08b3113c1596a44a19f56644f16f6da"} Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.221075 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-8nm76" Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.247914 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" event={"ID":"8156e329-ca23-4079-8b23-ba0c32cc89a9","Type":"ContainerStarted","Data":"63d955c9eb944fbc9b633d81f22a480c0b0015118439f0499ec3e88b22dc9f57"} Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.249255 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h2lh7" Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.249586 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4c8h9" Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.249827 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-lr6n6" Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.263248 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-whlx4" Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.269337 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-vwjp5"] Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.274702 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vrj6k" Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.277153 4786 patch_prober.go:28] interesting pod/downloads-7954f5f757-8nm76 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.277219 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8nm76" podUID="cbf5f627-0aa5-4a32-840c-f76373e2150e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.291959 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:19 crc kubenswrapper[4786]: E0127 13:09:19.292331 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:19.792316142 +0000 UTC m=+143.002930261 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.298572 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-qmkrf"] Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.312810 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-pgkx8" event={"ID":"b015794a-bfb0-4118-8dae-8861a7ff6a03","Type":"ContainerStarted","Data":"c7404ac3d22b38b74536039a1beef39a4394310e83fbadd3b24606f9c339862d"} Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.314362 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-pgkx8" event={"ID":"b015794a-bfb0-4118-8dae-8861a7ff6a03","Type":"ContainerStarted","Data":"78f8640113104f3eb146aad5acd33237fdabe1d8f249f914f1cbf81729c6825b"} Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.316893 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-j6ww5"] Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.317778 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc82m"] Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.325562 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2j2wh" Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.334677 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7l2ns" event={"ID":"d3f971e8-68fd-40a7-902a-ba8c6110f14d","Type":"ContainerStarted","Data":"48a7372aa9365c77fc2f4a40f5d6fd969faa32b8431707ce8b089559eed3c58a"} Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.344021 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k47sw" event={"ID":"6cba5a02-cf5f-4aaa-9c5c-c5979789f000","Type":"ContainerStarted","Data":"0a6c20223f11b62a5bf14777d352e963e978fc1c55b712556836c02cd7df8d5a"} Jan 27 13:09:19 crc kubenswrapper[4786]: W0127 13:09:19.345050 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcaeaacdd_b085_4196_8fcc_f10ba2b593f7.slice/crio-bd753523d13c094a06723aace19843d320ae68092e0e523a8fcf2f485b0376d1 WatchSource:0}: Error finding container bd753523d13c094a06723aace19843d320ae68092e0e523a8fcf2f485b0376d1: Status 404 returned error can't find the container with id bd753523d13c094a06723aace19843d320ae68092e0e523a8fcf2f485b0376d1 Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.346017 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wtdrs" event={"ID":"63594f44-fa91-43fe-b1da-d1df4f593e45","Type":"ContainerStarted","Data":"83da580caeaf7d9b27deb658fabc0d91fde7f6645c1c3939e643056b0d3456cd"} Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.346043 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wtdrs" 
event={"ID":"63594f44-fa91-43fe-b1da-d1df4f593e45","Type":"ContainerStarted","Data":"a2913eaee8dc7b9f4e066bb9aa6e221b72763a4d164fc4a390943e5289c8ca5a"} Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.346882 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-wtdrs" Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.366388 4786 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-wtdrs container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.366447 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-wtdrs" podUID="63594f44-fa91-43fe-b1da-d1df4f593e45" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.393516 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:19 crc kubenswrapper[4786]: E0127 13:09:19.393850 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:19.893831637 +0000 UTC m=+143.104445756 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:19 crc kubenswrapper[4786]: W0127 13:09:19.465363 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ef0407d_860c_4e18_8dea_9373887d7b88.slice/crio-1bb49979da26da8349b525136ba0675229f0682267d297792cf7439d7c743c7e WatchSource:0}: Error finding container 1bb49979da26da8349b525136ba0675229f0682267d297792cf7439d7c743c7e: Status 404 returned error can't find the container with id 1bb49979da26da8349b525136ba0675229f0682267d297792cf7439d7c743c7e Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.494673 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:19 crc kubenswrapper[4786]: E0127 13:09:19.494831 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:19.994803586 +0000 UTC m=+143.205417705 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.495202 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:19 crc kubenswrapper[4786]: W0127 13:09:19.495755 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5922ee53_d413_4676_ab1e_21f570893009.slice/crio-049a70228e433348f388779729a0f6b42ec161d9fa52260b7eb32c472fb1f651 WatchSource:0}: Error finding container 049a70228e433348f388779729a0f6b42ec161d9fa52260b7eb32c472fb1f651: Status 404 returned error can't find the container with id 049a70228e433348f388779729a0f6b42ec161d9fa52260b7eb32c472fb1f651 Jan 27 13:09:19 crc kubenswrapper[4786]: E0127 13:09:19.496268 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:19.99625989 +0000 UTC m=+143.206874009 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.510813 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-47c2z"] Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.511098 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lw2xw"] Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.557434 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ndd4n"] Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.596020 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:19 crc kubenswrapper[4786]: E0127 13:09:19.596370 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:20.096356112 +0000 UTC m=+143.306970231 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.613594 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-ck8s7"] Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.640245 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8gmql"] Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.661155 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-qzpj9"] Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.664946 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-pq9ft"] Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.697413 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:19 crc kubenswrapper[4786]: E0127 13:09:19.698103 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:20.198077284 +0000 UTC m=+143.408691403 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.804236 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:19 crc kubenswrapper[4786]: E0127 13:09:19.804338 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:20.304318161 +0000 UTC m=+143.514932290 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.804549 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:19 crc kubenswrapper[4786]: E0127 13:09:19.805278 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:20.305267009 +0000 UTC m=+143.515881128 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:19 crc kubenswrapper[4786]: I0127 13:09:19.905359 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:19 crc kubenswrapper[4786]: E0127 13:09:19.905663 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:20.40564865 +0000 UTC m=+143.616262769 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.007208 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:20 crc kubenswrapper[4786]: E0127 13:09:20.007644 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:20.507622699 +0000 UTC m=+143.718236818 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.108429 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:20 crc kubenswrapper[4786]: E0127 13:09:20.108693 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:20.60866188 +0000 UTC m=+143.819275999 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.108985 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:20 crc kubenswrapper[4786]: E0127 13:09:20.109467 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:20.609452654 +0000 UTC m=+143.820066773 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.138300 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-8nm76" podStartSLOduration=122.138280916 podStartE2EDuration="2m2.138280916s" podCreationTimestamp="2026-01-27 13:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:20.095793026 +0000 UTC m=+143.306407175" watchObservedRunningTime="2026-01-27 13:09:20.138280916 +0000 UTC m=+143.348895035" Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.210221 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:20 crc kubenswrapper[4786]: E0127 13:09:20.210944 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:20.710922478 +0000 UTC m=+143.921536597 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.241360 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-wtdrs" podStartSLOduration=122.241339398 podStartE2EDuration="2m2.241339398s" podCreationTimestamp="2026-01-27 13:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:20.214174775 +0000 UTC m=+143.424788894" watchObservedRunningTime="2026-01-27 13:09:20.241339398 +0000 UTC m=+143.451953527" Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.307913 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-knblz"] Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.312428 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:20 crc kubenswrapper[4786]: E0127 13:09:20.312742 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 13:09:20.812728382 +0000 UTC m=+144.023342491 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.326188 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qvpfk"] Jan 27 13:09:20 crc kubenswrapper[4786]: W0127 13:09:20.334216 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4872c606_0857_4266_8cd8_a78c8070b5ca.slice/crio-2c937dd880c9b030a6bccd023126f1005ddfc5d2ae3329f38a608d98f6e6f0af WatchSource:0}: Error finding container 2c937dd880c9b030a6bccd023126f1005ddfc5d2ae3329f38a608d98f6e6f0af: Status 404 returned error can't find the container with id 2c937dd880c9b030a6bccd023126f1005ddfc5d2ae3329f38a608d98f6e6f0af Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.334265 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sllw5"] Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.350203 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-wllp5"] Jan 27 13:09:20 crc kubenswrapper[4786]: W0127 13:09:20.351584 4786 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode183bd5d_f0d0_4254_82d5_240578ae6d1a.slice/crio-0f3f93fc3aa6ac0fc591d7cbf599d166a70a422de4dd0e5fdb0c85621fec3c87 WatchSource:0}: Error finding container 0f3f93fc3aa6ac0fc591d7cbf599d166a70a422de4dd0e5fdb0c85621fec3c87: Status 404 returned error can't find the container with id 0f3f93fc3aa6ac0fc591d7cbf599d166a70a422de4dd0e5fdb0c85621fec3c87 Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.354809 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2wgf7"] Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.358006 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-qzpj9" event={"ID":"7275decb-852d-401b-81b7-affb84126aad","Type":"ContainerStarted","Data":"dafcf797c9285dc6fb6ce19422683192d75976f965000b07e07edcee504ed857"} Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.360309 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ck8s7" event={"ID":"33df9034-8bca-4bf5-ac81-73b8b14ca319","Type":"ContainerStarted","Data":"3eb9f357aa898efe32d78a834eac398d23973129a74a985eb4ab558dbb1102e5"} Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.364661 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qmkrf" event={"ID":"f1bfd5a9-8223-4fb1-b3e3-1acab59d886e","Type":"ContainerStarted","Data":"9b6d2edd5aff1fc62ce4475ea3615eec967511a8c2197e10e48130745b64e3bf"} Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.364705 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qmkrf" event={"ID":"f1bfd5a9-8223-4fb1-b3e3-1acab59d886e","Type":"ContainerStarted","Data":"3b9a1dc1e08e286fd4cf9f7d2c9f4d1be6d8462f504813500e6f2d4a1a702c06"} Jan 27 13:09:20 crc kubenswrapper[4786]: W0127 13:09:20.365464 4786 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod021c92f0_97e6_41aa_9ff3_b14e81d3431a.slice/crio-48bcd9bea061b0c9459f7cb9c4cda6cbf93359e43902ce43e341f5c20610b2f8 WatchSource:0}: Error finding container 48bcd9bea061b0c9459f7cb9c4cda6cbf93359e43902ce43e341f5c20610b2f8: Status 404 returned error can't find the container with id 48bcd9bea061b0c9459f7cb9c4cda6cbf93359e43902ce43e341f5c20610b2f8 Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.368726 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-pgkx8" event={"ID":"b015794a-bfb0-4118-8dae-8861a7ff6a03","Type":"ContainerStarted","Data":"37d225bb3c3584c166ecc80a6a419c39653edf0bf07ff5101f0acdec82f7cfbb"} Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.377190 4786 generic.go:334] "Generic (PLEG): container finished" podID="d3f971e8-68fd-40a7-902a-ba8c6110f14d" containerID="efd0e9fcd510d54d90a774b83ad1acf4a04bc6dc670780c40ff8ef2584de8c82" exitCode=0 Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.377260 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7l2ns" event={"ID":"d3f971e8-68fd-40a7-902a-ba8c6110f14d","Type":"ContainerDied","Data":"efd0e9fcd510d54d90a774b83ad1acf4a04bc6dc670780c40ff8ef2584de8c82"} Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.379225 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-whlx4" event={"ID":"ddd8d0e9-9f75-4f78-9d96-9373fdaa9c08","Type":"ContainerStarted","Data":"84121a193e3a8d395b63ccee2149b1912a731072698df6c7dbca518fa8be393f"} Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.382860 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-ddxlt" 
event={"ID":"caeaacdd-b085-4196-8fcc-f10ba2b593f7","Type":"ContainerStarted","Data":"bd753523d13c094a06723aace19843d320ae68092e0e523a8fcf2f485b0376d1"} Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.396403 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-mtzzc"] Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.398883 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lw2xw" event={"ID":"f90e2034-b734-41d4-b7e2-856605c95aff","Type":"ContainerStarted","Data":"654c50840ac27316c71b6b353eb5acb097d3da0091412a7ac0cf399129906550"} Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.398932 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-8vdpv"] Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.398947 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lw2xw" event={"ID":"f90e2034-b734-41d4-b7e2-856605c95aff","Type":"ContainerStarted","Data":"412d0bb35265a2c2b0b2e6b2893b8c69b2b766478be064c5c24b47d601d0a55e"} Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.399128 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lw2xw" Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.409459 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ndd4n" event={"ID":"294ae02f-293d-4db8-9a9f-6d3878c8ccf9","Type":"ContainerStarted","Data":"ca9f9eda2f6fdfd30b84eec83c329291324add7c7587bc058f038500a3373ece"} Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.410682 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-knblz" 
event={"ID":"4872c606-0857-4266-8cd8-a78c8070b5ca","Type":"ContainerStarted","Data":"2c937dd880c9b030a6bccd023126f1005ddfc5d2ae3329f38a608d98f6e6f0af"} Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.411340 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5bptf"] Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.413209 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:20 crc kubenswrapper[4786]: E0127 13:09:20.413651 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:20.913637059 +0000 UTC m=+144.124251178 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.413836 4786 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-lw2xw container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.413886 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lw2xw" podUID="f90e2034-b734-41d4-b7e2-856605c95aff" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.415066 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-j6ww5" event={"ID":"3ef0407d-860c-4e18-8dea-9373887d7b88","Type":"ContainerStarted","Data":"1bb49979da26da8349b525136ba0675229f0682267d297792cf7439d7c743c7e"} Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.418555 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-shs56" event={"ID":"fafe1848-6215-4056-92a7-d8032eca6e26","Type":"ContainerStarted","Data":"904acd6a132e2749e19d5edcc8d74a735dc393b9cb124dba6a00e1e0e048d141"} Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.418597 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-server-shs56" event={"ID":"fafe1848-6215-4056-92a7-d8032eca6e26","Type":"ContainerStarted","Data":"5321b0c04a9958a54c3f1053c35baa203c138b8fd75df9557c6b0c84617b03bb"} Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.420444 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vszvp" event={"ID":"bb619dcb-2b7a-413f-b136-48e4eec0eb9f","Type":"ContainerStarted","Data":"e7692bbd3fbb62eb5e64942f9013f97d6c889343e1d9d8bf3ddd9f8abf947a38"} Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.420468 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vszvp" event={"ID":"bb619dcb-2b7a-413f-b136-48e4eec0eb9f","Type":"ContainerStarted","Data":"c89bb5569ed64af082bdd0562f28da35ae787b046fe05ccdf969e9a79297142d"} Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.424467 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8gmql" event={"ID":"3d6bb6cb-129c-4f02-8afc-4c73ebaefdb2","Type":"ContainerStarted","Data":"f1392d4734522e8346eba9f3af5184eb9fac19190a7e11044ab6812e54ca313e"} Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.426975 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k47sw" event={"ID":"6cba5a02-cf5f-4aaa-9c5c-c5979789f000","Type":"ContainerStarted","Data":"7a92d1d191ebf5d71620fe85c5fd71353c8a9bb374ec72118ae54586d828321d"} Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.427038 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k47sw" event={"ID":"6cba5a02-cf5f-4aaa-9c5c-c5979789f000","Type":"ContainerStarted","Data":"a598cbdab0e4dd4b33b61b42765da5a4637f58c6236889ddecf77a9b8c3ec6be"} Jan 27 13:09:20 crc 
kubenswrapper[4786]: I0127 13:09:20.434253 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-vwjp5" event={"ID":"47f5a0b2-7757-4795-901e-d175d64ebe67","Type":"ContainerStarted","Data":"0fb53450c1bd061f1314b2f0eb86ed25eacddb0a222911ad2e8dc687a44bd735"} Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.434288 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-vwjp5" event={"ID":"47f5a0b2-7757-4795-901e-d175d64ebe67","Type":"ContainerStarted","Data":"6987c095acb536d7d04d579ec708fd38ddcae5dda6b12d8794fdfe05a2c1c0c0"} Jan 27 13:09:20 crc kubenswrapper[4786]: W0127 13:09:20.468368 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc35b6e44_42c4_4322_a9cd_14a087926529.slice/crio-4fd7738bc0ec91283dd0b096d8e68d66002a1fa5a362640af0b69452401a3da3 WatchSource:0}: Error finding container 4fd7738bc0ec91283dd0b096d8e68d66002a1fa5a362640af0b69452401a3da3: Status 404 returned error can't find the container with id 4fd7738bc0ec91283dd0b096d8e68d66002a1fa5a362640af0b69452401a3da3 Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.476936 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc82m" event={"ID":"5922ee53-d413-4676-ab1e-21f570893009","Type":"ContainerStarted","Data":"7b361ce84ef5437493b4d64e23eb455fe95b45b74c4eb5e12ae616fadc585b47"} Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.477237 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc82m" event={"ID":"5922ee53-d413-4676-ab1e-21f570893009","Type":"ContainerStarted","Data":"049a70228e433348f388779729a0f6b42ec161d9fa52260b7eb32c472fb1f651"} Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.480465 4786 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" event={"ID":"8156e329-ca23-4079-8b23-ba0c32cc89a9","Type":"ContainerStarted","Data":"3dc291bed09e2e266bf31c4257b99aa71c0336fe7c0143d2c4eeb431200b7460"} Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.480877 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.491672 4786 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-nsc4d container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.34:6443/healthz\": dial tcp 10.217.0.34:6443: connect: connection refused" start-of-body= Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.491886 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" podUID="8156e329-ca23-4079-8b23-ba0c32cc89a9" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.34:6443/healthz\": dial tcp 10.217.0.34:6443: connect: connection refused" Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.510649 4786 generic.go:334] "Generic (PLEG): container finished" podID="a6917b6e-005e-44cf-92c4-6fc271f5ce49" containerID="26f755f0a2c14969227cb1bfcd75fb2c51c46ff4c3aea033897c8fd7588ffe01" exitCode=0 Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.510794 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9l4wd" event={"ID":"a6917b6e-005e-44cf-92c4-6fc271f5ce49","Type":"ContainerDied","Data":"26f755f0a2c14969227cb1bfcd75fb2c51c46ff4c3aea033897c8fd7588ffe01"} Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.516856 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:20 crc kubenswrapper[4786]: E0127 13:09:20.520218 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:21.020201815 +0000 UTC m=+144.230816054 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.537569 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-72vpf" event={"ID":"975cb02b-51f0-4d7b-a59c-b25126c0c1c2","Type":"ContainerStarted","Data":"950c66bdc38069594b8a14e32366ea161d35fb5dfb62df1cba10cdbfec6c6a1d"} Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.539515 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-72vpf" Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.561583 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-47c2z" event={"ID":"4ac1751a-b5c6-46c7-a771-200f41805eea","Type":"ContainerStarted","Data":"c027fdd6f0cce5824ecd2ac5aeed080b2154aa89f4411401a79396a4e411f5c1"} Jan 27 
13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.563580 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-wv4hn" event={"ID":"54c0b843-e588-4296-a0ab-8272fa1b23e5","Type":"ContainerStarted","Data":"7d9bb6f05764d0fed31909f7da3881fde129bb149b6f950c138c3e927cd2b399"} Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.575735 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-pq9ft" event={"ID":"486083ab-c15d-43da-9eb1-c9736ed02e4e","Type":"ContainerStarted","Data":"28a6a0b12acc6e9d7e6bf3658b56a00c95e68dafb773e6dfff04b16f33d2e4c7"} Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.581785 4786 patch_prober.go:28] interesting pod/downloads-7954f5f757-8nm76 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.581851 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8nm76" podUID="cbf5f627-0aa5-4a32-840c-f76373e2150e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.617254 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vrj6k"] Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.619364 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 
13:09:20.621411 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-lr6n6"] Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.621534 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-wtdrs" Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.621897 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-pgkx8" podStartSLOduration=122.621882476 podStartE2EDuration="2m2.621882476s" podCreationTimestamp="2026-01-27 13:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:20.613209726 +0000 UTC m=+143.823823845" watchObservedRunningTime="2026-01-27 13:09:20.621882476 +0000 UTC m=+143.832496595" Jan 27 13:09:20 crc kubenswrapper[4786]: E0127 13:09:20.623669 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:21.123645518 +0000 UTC m=+144.334259647 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.631261 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h2lh7"] Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.640944 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.643486 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2j2wh"] Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.655808 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4c8h9"] Jan 27 13:09:20 crc kubenswrapper[4786]: E0127 13:09:20.672298 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:21.172277453 +0000 UTC m=+144.382891572 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.679642 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lhhfq"] Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.715897 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-6xjzj"] Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.716232 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d8n2x"] Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.721506 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491980-6pf7c"] Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.740103 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-72vpf" Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.742094 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:20 crc kubenswrapper[4786]: E0127 13:09:20.743247 4786 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:21.243228234 +0000 UTC m=+144.453842353 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:20 crc kubenswrapper[4786]: W0127 13:09:20.770876 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb44d5db3_8518_4039_9d50_6d9d991e78bc.slice/crio-67702490278ac504994ee7d25d5447fc7e44243b3c730d591308de06f9cb49da WatchSource:0}: Error finding container 67702490278ac504994ee7d25d5447fc7e44243b3c730d591308de06f9cb49da: Status 404 returned error can't find the container with id 67702490278ac504994ee7d25d5447fc7e44243b3c730d591308de06f9cb49da Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.845219 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:20 crc kubenswrapper[4786]: E0127 13:09:20.845595 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 13:09:21.345581004 +0000 UTC m=+144.556195133 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:20 crc kubenswrapper[4786]: W0127 13:09:20.865758 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3cf531c6_d1a1_4f65_af72_093ffdb034c1.slice/crio-6711149d338b16dea7727f97cfb21a760cc657c2d8ff37080ceec43b0478fde1 WatchSource:0}: Error finding container 6711149d338b16dea7727f97cfb21a760cc657c2d8ff37080ceec43b0478fde1: Status 404 returned error can't find the container with id 6711149d338b16dea7727f97cfb21a760cc657c2d8ff37080ceec43b0478fde1 Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.873487 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" podStartSLOduration=123.873469328 podStartE2EDuration="2m3.873469328s" podCreationTimestamp="2026-01-27 13:07:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:20.857162281 +0000 UTC m=+144.067776400" watchObservedRunningTime="2026-01-27 13:09:20.873469328 +0000 UTC m=+144.084083457" Jan 27 13:09:20 crc kubenswrapper[4786]: W0127 13:09:20.886786 4786 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode69af92e_0d80_428e_930c_fbd86b17643c.slice/crio-7c3c08532d0a755dd8e296e7d4bc1bd129afc1da3438c5d57b7bfdb45f085af2 WatchSource:0}: Error finding container 7c3c08532d0a755dd8e296e7d4bc1bd129afc1da3438c5d57b7bfdb45f085af2: Status 404 returned error can't find the container with id 7c3c08532d0a755dd8e296e7d4bc1bd129afc1da3438c5d57b7bfdb45f085af2 Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.900012 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc82m" podStartSLOduration=122.899993671 podStartE2EDuration="2m2.899993671s" podCreationTimestamp="2026-01-27 13:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:20.898991251 +0000 UTC m=+144.109605370" watchObservedRunningTime="2026-01-27 13:09:20.899993671 +0000 UTC m=+144.110607800" Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.955221 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:20 crc kubenswrapper[4786]: E0127 13:09:20.955581 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:21.455552483 +0000 UTC m=+144.666166602 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:20 crc kubenswrapper[4786]: I0127 13:09:20.955765 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:20 crc kubenswrapper[4786]: E0127 13:09:20.956108 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:21.456101519 +0000 UTC m=+144.666715638 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.053259 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-wv4hn" podStartSLOduration=124.053237973 podStartE2EDuration="2m4.053237973s" podCreationTimestamp="2026-01-27 13:07:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:21.051823421 +0000 UTC m=+144.262437540" watchObservedRunningTime="2026-01-27 13:09:21.053237973 +0000 UTC m=+144.263852092" Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.057852 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:21 crc kubenswrapper[4786]: E0127 13:09:21.058086 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:21.558058677 +0000 UTC m=+144.768672796 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.058254 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:21 crc kubenswrapper[4786]: E0127 13:09:21.058594 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:21.558582233 +0000 UTC m=+144.769196352 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.168952 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:21 crc kubenswrapper[4786]: E0127 13:09:21.169644 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:21.669628403 +0000 UTC m=+144.880242522 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.273359 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:21 crc kubenswrapper[4786]: E0127 13:09:21.273718 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:21.773706275 +0000 UTC m=+144.984320394 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.356248 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k47sw" podStartSLOduration=124.356102029 podStartE2EDuration="2m4.356102029s" podCreationTimestamp="2026-01-27 13:07:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:21.354069448 +0000 UTC m=+144.564683567" watchObservedRunningTime="2026-01-27 13:09:21.356102029 +0000 UTC m=+144.566716148" Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.374116 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:21 crc kubenswrapper[4786]: E0127 13:09:21.374543 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:21.874523139 +0000 UTC m=+145.085137258 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.458410 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-vwjp5" podStartSLOduration=123.458392567 podStartE2EDuration="2m3.458392567s" podCreationTimestamp="2026-01-27 13:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:21.457716617 +0000 UTC m=+144.668330746" watchObservedRunningTime="2026-01-27 13:09:21.458392567 +0000 UTC m=+144.669006686" Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.476455 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:21 crc kubenswrapper[4786]: E0127 13:09:21.476914 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:21.976896171 +0000 UTC m=+145.187510290 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.549595 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-72vpf" podStartSLOduration=123.549577074 podStartE2EDuration="2m3.549577074s" podCreationTimestamp="2026-01-27 13:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:21.549059658 +0000 UTC m=+144.759673777" watchObservedRunningTime="2026-01-27 13:09:21.549577074 +0000 UTC m=+144.760191193" Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.577378 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:21 crc kubenswrapper[4786]: E0127 13:09:21.577693 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:22.077678484 +0000 UTC m=+145.288292603 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.627779 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7l2ns" event={"ID":"d3f971e8-68fd-40a7-902a-ba8c6110f14d","Type":"ContainerStarted","Data":"c459ebcd20281cea478a75117e2315a26b1e8ffb2444db7afd3bb0e7f60058d7"} Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.628009 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7l2ns" Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.643956 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-47c2z" event={"ID":"4ac1751a-b5c6-46c7-a771-200f41805eea","Type":"ContainerStarted","Data":"a305ba8f520341f49b7cf19ec1ac5be3ed284afbe5c4e2911b5e6cf668d6fd13"} Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.649534 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mtzzc" event={"ID":"c35b6e44-42c4-4322-a9cd-14a087926529","Type":"ContainerStarted","Data":"7286d73cbb105c8a1a22c856ccee029be880536fffd0b551308ebfdf56541478"} Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.649572 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mtzzc" 
event={"ID":"c35b6e44-42c4-4322-a9cd-14a087926529","Type":"ContainerStarted","Data":"4fd7738bc0ec91283dd0b096d8e68d66002a1fa5a362640af0b69452401a3da3"} Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.655169 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6xjzj" event={"ID":"3ad64404-53a2-45aa-8eba-292f14135379","Type":"ContainerStarted","Data":"8afcaff44b5b488dedcb1dd7897dcf8b06483e3aea40bd5a3d4b04c7ec5c4abd"} Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.655217 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6xjzj" event={"ID":"3ad64404-53a2-45aa-8eba-292f14135379","Type":"ContainerStarted","Data":"fc2d1c410807253f613a0046e717b1cb81ebac4484544babd740903472e9ae89"} Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.656197 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-shs56" podStartSLOduration=6.656186412 podStartE2EDuration="6.656186412s" podCreationTimestamp="2026-01-27 13:09:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:21.653677796 +0000 UTC m=+144.864291915" watchObservedRunningTime="2026-01-27 13:09:21.656186412 +0000 UTC m=+144.866800531" Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.684644 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wllp5" event={"ID":"021c92f0-97e6-41aa-9ff3-b14e81d3431a","Type":"ContainerStarted","Data":"441ed874c8732cf34775bcc59dfc101197868628b6e333598245eae631320a78"} Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.684685 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wllp5" 
event={"ID":"021c92f0-97e6-41aa-9ff3-b14e81d3431a","Type":"ContainerStarted","Data":"48bcd9bea061b0c9459f7cb9c4cda6cbf93359e43902ce43e341f5c20610b2f8"} Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.686007 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:21 crc kubenswrapper[4786]: E0127 13:09:21.687415 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:22.187404445 +0000 UTC m=+145.398018564 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.717113 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-knblz" event={"ID":"4872c606-0857-4266-8cd8-a78c8070b5ca","Type":"ContainerStarted","Data":"b24f67f8530d421f30d498b9c1c0a6279420d57a0f8ff2013bca5131d3218426"} Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.721852 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vrj6k" 
event={"ID":"80f22eb7-03e4-4217-939a-1600b43c788a","Type":"ContainerStarted","Data":"2ea3591bb21ea1375f19577c55bb92f4e075179abe22ca5f483206d271ac8222"} Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.723204 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lhhfq" event={"ID":"bdde3a3a-6d8a-45b8-b718-378bca43559e","Type":"ContainerStarted","Data":"40d323145d598f72bdaaf455cd19dc393275f4a3e0477eb0fac873663ba39134"} Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.727555 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qvpfk" event={"ID":"e183bd5d-f0d0-4254-82d5-240578ae6d1a","Type":"ContainerStarted","Data":"0f3f93fc3aa6ac0fc591d7cbf599d166a70a422de4dd0e5fdb0c85621fec3c87"} Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.738107 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-qzpj9" event={"ID":"7275decb-852d-401b-81b7-affb84126aad","Type":"ContainerStarted","Data":"de092b9d779da38fe000c090f92e1100bfc4e7ffd09e1bfe4366b39ca6074bdb"} Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.738965 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-qzpj9" Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.739863 4786 patch_prober.go:28] interesting pod/console-operator-58897d9998-qzpj9 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.739909 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-qzpj9" podUID="7275decb-852d-401b-81b7-affb84126aad" containerName="console-operator" probeResult="failure" 
output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.756204 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sllw5" event={"ID":"05c9c8a7-f1d1-4d38-b6e3-9079942aaf40","Type":"ContainerStarted","Data":"2bda2f82e7018815bf65f760f5111b17a1b04d031bd10371839347997f534c6e"} Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.756248 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sllw5" event={"ID":"05c9c8a7-f1d1-4d38-b6e3-9079942aaf40","Type":"ContainerStarted","Data":"cddb4a8b68a9632be5837ad51dcbf552ca2e89abef135275ee3aedd3971b5765"} Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.768363 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lw2xw" podStartSLOduration=123.768327235 podStartE2EDuration="2m3.768327235s" podCreationTimestamp="2026-01-27 13:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:21.7618289 +0000 UTC m=+144.972443029" watchObservedRunningTime="2026-01-27 13:09:21.768327235 +0000 UTC m=+144.978941364" Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.795514 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:21 crc kubenswrapper[4786]: E0127 13:09:21.796398 4786 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:22.296221449 +0000 UTC m=+145.506835578 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.796921 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:21 crc kubenswrapper[4786]: E0127 13:09:21.797871 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:22.297858367 +0000 UTC m=+145.508472576 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.799401 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qmkrf" event={"ID":"f1bfd5a9-8223-4fb1-b3e3-1acab59d886e","Type":"ContainerStarted","Data":"2874a5248b766223bab30d3ced995150e4a61774aa57fea009ca255c00f7abb0"} Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.810826 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-47c2z" podStartSLOduration=123.81080669400001 podStartE2EDuration="2m3.810806694s" podCreationTimestamp="2026-01-27 13:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:21.80799204 +0000 UTC m=+145.018606169" watchObservedRunningTime="2026-01-27 13:09:21.810806694 +0000 UTC m=+145.021420833" Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.836735 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-lr6n6" event={"ID":"fa7fb69c-753a-481b-891d-661a2b6b37fd","Type":"ContainerStarted","Data":"458001612e879af5895e20676f78f00980317626e1a509fa4eca299abc172111"} Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.866516 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2j2wh" 
event={"ID":"a0cb545f-a56f-4948-b56d-956edd2501db","Type":"ContainerStarted","Data":"2daa75e94b144837ada361d71d342d5b98d1e31c9335dd20c1ca7796b764dcfc"} Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.866779 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2j2wh" event={"ID":"a0cb545f-a56f-4948-b56d-956edd2501db","Type":"ContainerStarted","Data":"af214cf01c56c031de47807adc12f2db152112d8f1ee4ee4d07c88f0d67ce617"} Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.885575 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mtzzc" podStartSLOduration=123.8855522 podStartE2EDuration="2m3.8855522s" podCreationTimestamp="2026-01-27 13:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:21.846935735 +0000 UTC m=+145.057549854" watchObservedRunningTime="2026-01-27 13:09:21.8855522 +0000 UTC m=+145.096166319" Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.893191 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491980-6pf7c" event={"ID":"3cf531c6-d1a1-4f65-af72-093ffdb034c1","Type":"ContainerStarted","Data":"6711149d338b16dea7727f97cfb21a760cc657c2d8ff37080ceec43b0478fde1"} Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.901512 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:21 crc kubenswrapper[4786]: E0127 13:09:21.902779 4786 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:22.402759064 +0000 UTC m=+145.613373193 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.918908 4786 generic.go:334] "Generic (PLEG): container finished" podID="3ef0407d-860c-4e18-8dea-9373887d7b88" containerID="d55c4638fa0ce0aeacae7506fadfba1b14491748fb132df68e78e6138c234e92" exitCode=0 Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.918995 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-j6ww5" event={"ID":"3ef0407d-860c-4e18-8dea-9373887d7b88","Type":"ContainerStarted","Data":"f836576e38e7e62b6577bb9b4b414750694897f9f22330f6a73731ac5723828c"} Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.919023 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-j6ww5" event={"ID":"3ef0407d-860c-4e18-8dea-9373887d7b88","Type":"ContainerDied","Data":"d55c4638fa0ce0aeacae7506fadfba1b14491748fb132df68e78e6138c234e92"} Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.926287 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-qzpj9" podStartSLOduration=123.926271967 podStartE2EDuration="2m3.926271967s" podCreationTimestamp="2026-01-27 13:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:21.893739565 +0000 UTC m=+145.104353684" watchObservedRunningTime="2026-01-27 13:09:21.926271967 +0000 UTC m=+145.136886086" Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.927166 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sllw5" podStartSLOduration=123.927161213 podStartE2EDuration="2m3.927161213s" podCreationTimestamp="2026-01-27 13:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:21.926560785 +0000 UTC m=+145.137174904" watchObservedRunningTime="2026-01-27 13:09:21.927161213 +0000 UTC m=+145.137775332" Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.942257 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vszvp" event={"ID":"bb619dcb-2b7a-413f-b136-48e4eec0eb9f","Type":"ContainerStarted","Data":"30d50cf5bbcbfea054d70f6f872e52f413433907681bf3759ac1bff8d78ac7d8"} Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.944767 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h2lh7" event={"ID":"b44d5db3-8518-4039-9d50-6d9d991e78bc","Type":"ContainerStarted","Data":"67702490278ac504994ee7d25d5447fc7e44243b3c730d591308de06f9cb49da"} Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.946018 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-pq9ft" event={"ID":"486083ab-c15d-43da-9eb1-c9736ed02e4e","Type":"ContainerStarted","Data":"d5a4df73d79cf1648f5d9bd8d034a67920911d9fdae4c00765b132afe1844009"} Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.966426 4786 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7l2ns" podStartSLOduration=123.966405126 podStartE2EDuration="2m3.966405126s" podCreationTimestamp="2026-01-27 13:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:21.960877421 +0000 UTC m=+145.171491560" watchObservedRunningTime="2026-01-27 13:09:21.966405126 +0000 UTC m=+145.177019245" Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.986801 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2wgf7" event={"ID":"9472f740-2de2-4b00-b109-0b17604c4c9e","Type":"ContainerStarted","Data":"ea9787d5af061831ed0ee7ae733c5f2e70a6313cfd61134c2e790c512034424a"} Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.987740 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2wgf7" Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.997196 4786 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-2wgf7 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Jan 27 13:09:21 crc kubenswrapper[4786]: I0127 13:09:21.997273 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2wgf7" podUID="9472f740-2de2-4b00-b109-0b17604c4c9e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.003918 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:22 crc kubenswrapper[4786]: E0127 13:09:22.005553 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:22.505539657 +0000 UTC m=+145.716153776 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.058065 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ck8s7" event={"ID":"33df9034-8bca-4bf5-ac81-73b8b14ca319","Type":"ContainerStarted","Data":"8cf18801ba4d49d8dae09f402f81bef9d964988e8c8db6657df1fe596b3b90c6"} Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.061918 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vszvp" podStartSLOduration=125.061896342 podStartE2EDuration="2m5.061896342s" podCreationTimestamp="2026-01-27 13:07:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:22.061467229 +0000 UTC m=+145.272081348" watchObservedRunningTime="2026-01-27 13:09:22.061896342 +0000 UTC m=+145.272510471" 
Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.062308 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4c8h9" event={"ID":"952ee042-cdc6-44b7-8aa0-b24f7e1e1027","Type":"ContainerStarted","Data":"15d08b779383fa380af959c199747395458c16843b58b5931d33f44624508d01"} Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.063857 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-whlx4" event={"ID":"ddd8d0e9-9f75-4f78-9d96-9373fdaa9c08","Type":"ContainerStarted","Data":"3cebe88ef98914a6e8bf82335e33ee270c739f99c84c1edce177bba0917252e2"} Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.072897 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5bptf" event={"ID":"2ec75c03-cdcf-4d6e-97cd-453ea4d288f3","Type":"ContainerStarted","Data":"9f93b6d643be92f9a564b49bd7d62aaf3edc8552ff687b6a401825927addcb8c"} Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.072939 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5bptf" event={"ID":"2ec75c03-cdcf-4d6e-97cd-453ea4d288f3","Type":"ContainerStarted","Data":"87a778882e30df3129a1a9b81015c136892aa9d88be060847f53270cbc570c5b"} Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.073836 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5bptf" Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.074888 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qmkrf" podStartSLOduration=124.07486877 podStartE2EDuration="2m4.07486877s" podCreationTimestamp="2026-01-27 13:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-27 13:09:21.9989518 +0000 UTC m=+145.209565919" watchObservedRunningTime="2026-01-27 13:09:22.07486877 +0000 UTC m=+145.285482889" Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.095469 4786 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5bptf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" start-of-body= Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.095519 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5bptf" podUID="2ec75c03-cdcf-4d6e-97cd-453ea4d288f3" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.095782 4786 csr.go:261] certificate signing request csr-qsmqk is approved, waiting to be issued Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.095811 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-pq9ft" podStartSLOduration=6.095791316 podStartE2EDuration="6.095791316s" podCreationTimestamp="2026-01-27 13:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:22.095764485 +0000 UTC m=+145.306378604" watchObservedRunningTime="2026-01-27 13:09:22.095791316 +0000 UTC m=+145.306405435" Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.105216 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:22 crc kubenswrapper[4786]: E0127 13:09:22.105667 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:22.60564622 +0000 UTC m=+145.816260339 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.106509 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:22 crc kubenswrapper[4786]: E0127 13:09:22.108491 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:22.608477505 +0000 UTC m=+145.819091624 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.113898 4786 csr.go:257] certificate signing request csr-qsmqk is issued Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.160761 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8gmql" event={"ID":"3d6bb6cb-129c-4f02-8afc-4c73ebaefdb2","Type":"ContainerStarted","Data":"c1a204b2aa084fe49f4dae725280db26eace1ecdf99244b5d50d386e7bb672a1"} Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.172502 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-ddxlt" event={"ID":"caeaacdd-b085-4196-8fcc-f10ba2b593f7","Type":"ContainerStarted","Data":"5d3b843c2f15d60a11ef7bacbb3f0a48310a04635a109dc4d1d10908a72e05a7"} Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.177890 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ndd4n" event={"ID":"294ae02f-293d-4db8-9a9f-6d3878c8ccf9","Type":"ContainerStarted","Data":"134f9aafba7557b0fffd00fb582a5deae1327822d1ae4b6a2899a49c060abdd3"} Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.178565 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-ndd4n" Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.185144 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-8vdpv" 
event={"ID":"9b6d90be-b987-4cac-8e4c-7c1afa1abffe","Type":"ContainerStarted","Data":"5225fbfeb691fcf1d78971f81cdf6fe4883071877c426809ad157dbba7232e69"} Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.185185 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-8vdpv" event={"ID":"9b6d90be-b987-4cac-8e4c-7c1afa1abffe","Type":"ContainerStarted","Data":"52568db33b931b40b33ccab88f3ea0fa3e29afaeef01340e27caae9310b7ed65"} Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.187309 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-whlx4" podStartSLOduration=124.187283421 podStartE2EDuration="2m4.187283421s" podCreationTimestamp="2026-01-27 13:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:22.166899472 +0000 UTC m=+145.377513591" watchObservedRunningTime="2026-01-27 13:09:22.187283421 +0000 UTC m=+145.397897540" Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.189190 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d8n2x" event={"ID":"e69af92e-0d80-428e-930c-fbd86b17643c","Type":"ContainerStarted","Data":"7c3c08532d0a755dd8e296e7d4bc1bd129afc1da3438c5d57b7bfdb45f085af2"} Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.190195 4786 patch_prober.go:28] interesting pod/downloads-7954f5f757-8nm76 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.190229 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8nm76" podUID="cbf5f627-0aa5-4a32-840c-f76373e2150e" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.207730 4786 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-ndd4n container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.207789 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-ndd4n" podUID="294ae02f-293d-4db8-9a9f-6d3878c8ccf9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.207906 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.210370 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5bptf" podStartSLOduration=124.210360451 podStartE2EDuration="2m4.210360451s" podCreationTimestamp="2026-01-27 13:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:22.208438024 +0000 UTC m=+145.419052143" watchObservedRunningTime="2026-01-27 13:09:22.210360451 +0000 UTC m=+145.420974570" Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.211477 4786 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lw2xw" Jan 27 13:09:22 crc kubenswrapper[4786]: E0127 13:09:22.210616 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:22.710570598 +0000 UTC m=+145.921184767 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.212682 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:22 crc kubenswrapper[4786]: E0127 13:09:22.252134 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:22.752109839 +0000 UTC m=+145.962723958 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.265931 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-whlx4" Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.279587 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2wgf7" podStartSLOduration=124.279540769 podStartE2EDuration="2m4.279540769s" podCreationTimestamp="2026-01-27 13:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:22.25011646 +0000 UTC m=+145.460730599" watchObservedRunningTime="2026-01-27 13:09:22.279540769 +0000 UTC m=+145.490154888" Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.297435 4786 patch_prober.go:28] interesting pod/router-default-5444994796-whlx4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 13:09:22 crc kubenswrapper[4786]: [-]has-synced failed: reason withheld Jan 27 13:09:22 crc kubenswrapper[4786]: [+]process-running ok Jan 27 13:09:22 crc kubenswrapper[4786]: healthz check failed Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.297489 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-whlx4" podUID="ddd8d0e9-9f75-4f78-9d96-9373fdaa9c08" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.317804 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:22 crc kubenswrapper[4786]: E0127 13:09:22.318546 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:22.818505274 +0000 UTC m=+146.029119393 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.363148 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-8vdpv" podStartSLOduration=124.363128529 podStartE2EDuration="2m4.363128529s" podCreationTimestamp="2026-01-27 13:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:22.355685436 +0000 UTC m=+145.566299555" watchObservedRunningTime="2026-01-27 13:09:22.363128529 +0000 UTC m=+145.573742648" Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.422941 4786 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:22 crc kubenswrapper[4786]: E0127 13:09:22.423832 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:22.923815834 +0000 UTC m=+146.134429963 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.435499 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8gmql" podStartSLOduration=125.435480402 podStartE2EDuration="2m5.435480402s" podCreationTimestamp="2026-01-27 13:07:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:22.434299716 +0000 UTC m=+145.644913835" watchObservedRunningTime="2026-01-27 13:09:22.435480402 +0000 UTC m=+145.646094521" Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.480166 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-dns-operator/dns-operator-744455d44c-ddxlt" podStartSLOduration=124.480125267 podStartE2EDuration="2m4.480125267s" podCreationTimestamp="2026-01-27 13:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:22.477987783 +0000 UTC m=+145.688601902" watchObservedRunningTime="2026-01-27 13:09:22.480125267 +0000 UTC m=+145.690739386" Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.524195 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:22 crc kubenswrapper[4786]: E0127 13:09:22.524630 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:23.024613098 +0000 UTC m=+146.235227217 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.540950 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-ndd4n" podStartSLOduration=124.540934536 podStartE2EDuration="2m4.540934536s" podCreationTimestamp="2026-01-27 13:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:22.539294816 +0000 UTC m=+145.749908945" watchObservedRunningTime="2026-01-27 13:09:22.540934536 +0000 UTC m=+145.751548655" Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.626184 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:22 crc kubenswrapper[4786]: E0127 13:09:22.626667 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:23.126635768 +0000 UTC m=+146.337249887 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.728627 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:22 crc kubenswrapper[4786]: E0127 13:09:22.728782 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:23.228755101 +0000 UTC m=+146.439369220 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.729239 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:22 crc kubenswrapper[4786]: E0127 13:09:22.729521 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:23.229509744 +0000 UTC m=+146.440123863 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.830777 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:22 crc kubenswrapper[4786]: E0127 13:09:22.830966 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:23.330941726 +0000 UTC m=+146.541555845 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.831127 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:22 crc kubenswrapper[4786]: E0127 13:09:22.831465 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:23.331437141 +0000 UTC m=+146.542051260 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.932697 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:22 crc kubenswrapper[4786]: E0127 13:09:22.933019 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:23.432986528 +0000 UTC m=+146.643600657 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:22 crc kubenswrapper[4786]: I0127 13:09:22.933194 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:22 crc kubenswrapper[4786]: E0127 13:09:22.933493 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:23.433478883 +0000 UTC m=+146.644093002 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.033881 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:23 crc kubenswrapper[4786]: E0127 13:09:23.034102 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:23.53407086 +0000 UTC m=+146.744684989 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.034280 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:23 crc kubenswrapper[4786]: E0127 13:09:23.034639 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:23.534622337 +0000 UTC m=+146.745236456 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.062834 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.115916 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-27 13:04:22 +0000 UTC, rotation deadline is 2026-10-25 03:28:49.697848205 +0000 UTC Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.115955 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6494h19m26.581895126s for next certificate rotation Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.135793 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:23 crc kubenswrapper[4786]: E0127 13:09:23.135999 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:23.635969127 +0000 UTC m=+146.846583256 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.136128 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:23 crc kubenswrapper[4786]: E0127 13:09:23.136439 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:23.63642784 +0000 UTC m=+146.847041959 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.193983 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2wgf7" event={"ID":"9472f740-2de2-4b00-b109-0b17604c4c9e","Type":"ContainerStarted","Data":"873ee1afa1743ce0877f8dc3a59268953d173cc28e392659c8371067d7c1df1b"} Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.194800 4786 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-2wgf7 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.194869 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2wgf7" podUID="9472f740-2de2-4b00-b109-0b17604c4c9e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.195875 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wllp5" event={"ID":"021c92f0-97e6-41aa-9ff3-b14e81d3431a","Type":"ContainerStarted","Data":"806d24ffde5d52bcbb8f392115f567910e4f33671b3d7c093cdcba3a0c4a5765"} Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.198011 4786 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491980-6pf7c" event={"ID":"3cf531c6-d1a1-4f65-af72-093ffdb034c1","Type":"ContainerStarted","Data":"32509229f88f585984da1ef764f58e212b324ac35555669edaf5a7111aef9858"} Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.199647 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lhhfq" event={"ID":"bdde3a3a-6d8a-45b8-b718-378bca43559e","Type":"ContainerStarted","Data":"5405e1794433ec3e482925fd86e702f0d93949f0d6081041330049c8b656ae15"} Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.212281 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-j6ww5" event={"ID":"3ef0407d-860c-4e18-8dea-9373887d7b88","Type":"ContainerStarted","Data":"0e62dc82bc3a3a33611c51853b5754e9ca851e39326669f627e727005439f42b"} Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.216745 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d8n2x" event={"ID":"e69af92e-0d80-428e-930c-fbd86b17643c","Type":"ContainerStarted","Data":"245c12f18cd3ed74d88651ebf1cf97e8ddcbc0bf6d8f6c81fa68bd78f24e0339"} Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.219317 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ck8s7" event={"ID":"33df9034-8bca-4bf5-ac81-73b8b14ca319","Type":"ContainerStarted","Data":"ac46e0ad863752a2f5313da52bbfe286766996f4d5951b80613dec74cf0fc24a"} Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.219750 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-ck8s7" Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.222579 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h2lh7" 
event={"ID":"b44d5db3-8518-4039-9d50-6d9d991e78bc","Type":"ContainerStarted","Data":"eed5b306fb605cf0c4c258483b9a28c71dffee334095e7547f60aa6b09a7a073"} Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.225308 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2j2wh" event={"ID":"a0cb545f-a56f-4948-b56d-956edd2501db","Type":"ContainerStarted","Data":"12cd9d3a6cc5e42ee1c938134d4e57d09155d042ca86a972a038322a5a61ba2e"} Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.225786 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2j2wh" Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.230767 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-ddxlt" event={"ID":"caeaacdd-b085-4196-8fcc-f10ba2b593f7","Type":"ContainerStarted","Data":"a310f085e8491aac90bf84d429dac74dcab62c3be82f8377ac366ba7b1142ff4"} Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.233014 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vrj6k" event={"ID":"80f22eb7-03e4-4217-939a-1600b43c788a","Type":"ContainerStarted","Data":"832a5f66f487f29583b9abd252183f361d67dbd76717262049a6fa7a8f16fab0"} Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.233044 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vrj6k" event={"ID":"80f22eb7-03e4-4217-939a-1600b43c788a","Type":"ContainerStarted","Data":"ab59dc0548f74fcd6fbe5dc8168b4b606960acba525ee5dfb7e91e4e11407f01"} Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.236091 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4c8h9" 
event={"ID":"952ee042-cdc6-44b7-8aa0-b24f7e1e1027","Type":"ContainerStarted","Data":"d4cb0d3905fa17a705c08c1e4dc63a77b1efd8de566e3aba41eecdabc44af273"} Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.237036 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:23 crc kubenswrapper[4786]: E0127 13:09:23.237313 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:23.737297796 +0000 UTC m=+146.947911915 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.238166 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wllp5" podStartSLOduration=125.238152852 podStartE2EDuration="2m5.238152852s" podCreationTimestamp="2026-01-27 13:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:23.235373319 +0000 UTC m=+146.445987438" watchObservedRunningTime="2026-01-27 13:09:23.238152852 +0000 UTC 
m=+146.448766971" Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.244098 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6xjzj" event={"ID":"3ad64404-53a2-45aa-8eba-292f14135379","Type":"ContainerStarted","Data":"a69889315450a8cf1aae081d2499e1131983f6cdee931ec83ab62e7239ffba23"} Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.247225 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-knblz" event={"ID":"4872c606-0857-4266-8cd8-a78c8070b5ca","Type":"ContainerStarted","Data":"c4045a404438a4dee53316a748a57f67a822f6131448fc5f62ec8324eea4388c"} Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.253596 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9l4wd" event={"ID":"a6917b6e-005e-44cf-92c4-6fc271f5ce49","Type":"ContainerStarted","Data":"57aa0daaa0640a1766fe4cbb1b574d8e890af1e815f72018e3902da2782cd066"} Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.260483 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qvpfk" event={"ID":"e183bd5d-f0d0-4254-82d5-240578ae6d1a","Type":"ContainerStarted","Data":"e7ab6e0aa5c9f31846648e525d09f129f8fb1f768249fa9268fd627065425f82"} Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.262729 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-lr6n6" event={"ID":"fa7fb69c-753a-481b-891d-661a2b6b37fd","Type":"ContainerStarted","Data":"9ded9bc030163f73347ad83a2dc961edb691fcfeeed927a8896d224bbd2176be"} Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.268181 4786 patch_prober.go:28] interesting pod/console-operator-58897d9998-qzpj9 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: 
connect: connection refused" start-of-body= Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.268228 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-qzpj9" podUID="7275decb-852d-401b-81b7-affb84126aad" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.268813 4786 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-ndd4n container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.268841 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-ndd4n" podUID="294ae02f-293d-4db8-9a9f-6d3878c8ccf9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.270936 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d8n2x" podStartSLOduration=125.270921362 podStartE2EDuration="2m5.270921362s" podCreationTimestamp="2026-01-27 13:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:23.268037096 +0000 UTC m=+146.478651215" watchObservedRunningTime="2026-01-27 13:09:23.270921362 +0000 UTC m=+146.481535481" Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.276141 4786 patch_prober.go:28] interesting pod/router-default-5444994796-whlx4 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 13:09:23 crc kubenswrapper[4786]: [-]has-synced failed: reason withheld Jan 27 13:09:23 crc kubenswrapper[4786]: [+]process-running ok Jan 27 13:09:23 crc kubenswrapper[4786]: healthz check failed Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.276179 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-whlx4" podUID="ddd8d0e9-9f75-4f78-9d96-9373fdaa9c08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.338486 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:23 crc kubenswrapper[4786]: E0127 13:09:23.343364 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:23.843347647 +0000 UTC m=+147.053961766 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.363780 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-j6ww5" podStartSLOduration=126.363762018 podStartE2EDuration="2m6.363762018s" podCreationTimestamp="2026-01-27 13:07:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:23.334030789 +0000 UTC m=+146.544644908" watchObservedRunningTime="2026-01-27 13:09:23.363762018 +0000 UTC m=+146.574376127" Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.397671 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9l4wd" Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.397927 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9l4wd" Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.407125 4786 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-9l4wd container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.33:8443/livez\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.407479 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9l4wd" podUID="a6917b6e-005e-44cf-92c4-6fc271f5ce49" 
containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.33:8443/livez\": dial tcp 10.217.0.33:8443: connect: connection refused" Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.415480 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vrj6k" podStartSLOduration=125.415462154 podStartE2EDuration="2m5.415462154s" podCreationTimestamp="2026-01-27 13:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:23.413925338 +0000 UTC m=+146.624539467" watchObservedRunningTime="2026-01-27 13:09:23.415462154 +0000 UTC m=+146.626076273" Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.422771 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29491980-6pf7c" podStartSLOduration=125.422750191 podStartE2EDuration="2m5.422750191s" podCreationTimestamp="2026-01-27 13:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:23.3674959 +0000 UTC m=+146.578110019" watchObservedRunningTime="2026-01-27 13:09:23.422750191 +0000 UTC m=+146.633364310" Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.441305 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:23 crc kubenswrapper[4786]: E0127 13:09:23.443430 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:23.943381389 +0000 UTC m=+147.153995588 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.457747 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sk9mj"] Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.463460 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2j2wh" podStartSLOduration=125.463444539 podStartE2EDuration="2m5.463444539s" podCreationTimestamp="2026-01-27 13:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:23.462962004 +0000 UTC m=+146.673576123" watchObservedRunningTime="2026-01-27 13:09:23.463444539 +0000 UTC m=+146.674058658" Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.468458 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sk9mj" Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.474967 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.487262 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sk9mj"] Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.517276 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h2lh7" podStartSLOduration=125.51726278699999 podStartE2EDuration="2m5.517262787s" podCreationTimestamp="2026-01-27 13:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:23.515974809 +0000 UTC m=+146.726588928" watchObservedRunningTime="2026-01-27 13:09:23.517262787 +0000 UTC m=+146.727876906" Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.546220 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.546287 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f61b3f86-bdc9-44a2-a5a8-d9895393ddaa-catalog-content\") pod \"community-operators-sk9mj\" (UID: \"f61b3f86-bdc9-44a2-a5a8-d9895393ddaa\") " pod="openshift-marketplace/community-operators-sk9mj" Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 
13:09:23.546345 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tq9s\" (UniqueName: \"kubernetes.io/projected/f61b3f86-bdc9-44a2-a5a8-d9895393ddaa-kube-api-access-9tq9s\") pod \"community-operators-sk9mj\" (UID: \"f61b3f86-bdc9-44a2-a5a8-d9895393ddaa\") " pod="openshift-marketplace/community-operators-sk9mj" Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.546369 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f61b3f86-bdc9-44a2-a5a8-d9895393ddaa-utilities\") pod \"community-operators-sk9mj\" (UID: \"f61b3f86-bdc9-44a2-a5a8-d9895393ddaa\") " pod="openshift-marketplace/community-operators-sk9mj" Jan 27 13:09:23 crc kubenswrapper[4786]: E0127 13:09:23.546684 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:24.046670037 +0000 UTC m=+147.257284156 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.611355 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lhhfq" podStartSLOduration=125.611332841 podStartE2EDuration="2m5.611332841s" podCreationTimestamp="2026-01-27 13:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:23.565925252 +0000 UTC m=+146.776539371" watchObservedRunningTime="2026-01-27 13:09:23.611332841 +0000 UTC m=+146.821946960" Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.642783 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-j6ww5" Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.643085 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-j6ww5" Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.643204 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-ck8s7" podStartSLOduration=7.6431812820000005 podStartE2EDuration="7.643181282s" podCreationTimestamp="2026-01-27 13:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:23.607930698 +0000 UTC m=+146.818544817" watchObservedRunningTime="2026-01-27 13:09:23.643181282 +0000 UTC 
m=+146.853795422" Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.643772 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4c8h9" podStartSLOduration=125.64376457 podStartE2EDuration="2m5.64376457s" podCreationTimestamp="2026-01-27 13:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:23.642939785 +0000 UTC m=+146.853553904" watchObservedRunningTime="2026-01-27 13:09:23.64376457 +0000 UTC m=+146.854378689" Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.646987 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.647261 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f61b3f86-bdc9-44a2-a5a8-d9895393ddaa-catalog-content\") pod \"community-operators-sk9mj\" (UID: \"f61b3f86-bdc9-44a2-a5a8-d9895393ddaa\") " pod="openshift-marketplace/community-operators-sk9mj" Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.647332 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tq9s\" (UniqueName: \"kubernetes.io/projected/f61b3f86-bdc9-44a2-a5a8-d9895393ddaa-kube-api-access-9tq9s\") pod \"community-operators-sk9mj\" (UID: \"f61b3f86-bdc9-44a2-a5a8-d9895393ddaa\") " pod="openshift-marketplace/community-operators-sk9mj" Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.647364 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/f61b3f86-bdc9-44a2-a5a8-d9895393ddaa-utilities\") pod \"community-operators-sk9mj\" (UID: \"f61b3f86-bdc9-44a2-a5a8-d9895393ddaa\") " pod="openshift-marketplace/community-operators-sk9mj" Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.647781 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f61b3f86-bdc9-44a2-a5a8-d9895393ddaa-utilities\") pod \"community-operators-sk9mj\" (UID: \"f61b3f86-bdc9-44a2-a5a8-d9895393ddaa\") " pod="openshift-marketplace/community-operators-sk9mj" Jan 27 13:09:23 crc kubenswrapper[4786]: E0127 13:09:23.647845 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:24.147830712 +0000 UTC m=+147.358444831 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.648038 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f61b3f86-bdc9-44a2-a5a8-d9895393ddaa-catalog-content\") pod \"community-operators-sk9mj\" (UID: \"f61b3f86-bdc9-44a2-a5a8-d9895393ddaa\") " pod="openshift-marketplace/community-operators-sk9mj" Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.649897 4786 patch_prober.go:28] interesting pod/apiserver-76f77b778f-j6ww5 container/openshift-apiserver 
namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.649969 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-j6ww5" podUID="3ef0407d-860c-4e18-8dea-9373887d7b88" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.652391 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9lbn2"] Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.656509 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9lbn2" Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.658897 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.671991 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9lbn2"] Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.709776 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tq9s\" (UniqueName: \"kubernetes.io/projected/f61b3f86-bdc9-44a2-a5a8-d9895393ddaa-kube-api-access-9tq9s\") pod \"community-operators-sk9mj\" (UID: \"f61b3f86-bdc9-44a2-a5a8-d9895393ddaa\") " pod="openshift-marketplace/community-operators-sk9mj" Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.714109 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-lr6n6" podStartSLOduration=125.714084062 podStartE2EDuration="2m5.714084062s" podCreationTimestamp="2026-01-27 13:07:18 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:23.692435936 +0000 UTC m=+146.903050055" watchObservedRunningTime="2026-01-27 13:09:23.714084062 +0000 UTC m=+146.924698181" Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.751224 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc465\" (UniqueName: \"kubernetes.io/projected/dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a-kube-api-access-pc465\") pod \"certified-operators-9lbn2\" (UID: \"dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a\") " pod="openshift-marketplace/certified-operators-9lbn2" Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.751259 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a-catalog-content\") pod \"certified-operators-9lbn2\" (UID: \"dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a\") " pod="openshift-marketplace/certified-operators-9lbn2" Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.751311 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.751330 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a-utilities\") pod \"certified-operators-9lbn2\" (UID: \"dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a\") " pod="openshift-marketplace/certified-operators-9lbn2" Jan 27 13:09:23 crc 
kubenswrapper[4786]: E0127 13:09:23.751712 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:24.251698647 +0000 UTC m=+147.462312766 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.795926 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sk9mj"
Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.823550 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-knblz" podStartSLOduration=125.823530164 podStartE2EDuration="2m5.823530164s" podCreationTimestamp="2026-01-27 13:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:23.746006767 +0000 UTC m=+146.956620886" watchObservedRunningTime="2026-01-27 13:09:23.823530164 +0000 UTC m=+147.034144283"
Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.856013 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.856292 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc465\" (UniqueName: \"kubernetes.io/projected/dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a-kube-api-access-pc465\") pod \"certified-operators-9lbn2\" (UID: \"dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a\") " pod="openshift-marketplace/certified-operators-9lbn2"
Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.856317 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a-catalog-content\") pod \"certified-operators-9lbn2\" (UID: \"dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a\") " pod="openshift-marketplace/certified-operators-9lbn2"
Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.856368 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a-utilities\") pod \"certified-operators-9lbn2\" (UID: \"dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a\") " pod="openshift-marketplace/certified-operators-9lbn2"
Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.856922 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a-utilities\") pod \"certified-operators-9lbn2\" (UID: \"dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a\") " pod="openshift-marketplace/certified-operators-9lbn2"
Jan 27 13:09:23 crc kubenswrapper[4786]: E0127 13:09:23.857014 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:24.356997235 +0000 UTC m=+147.567611354 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.857047 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a-catalog-content\") pod \"certified-operators-9lbn2\" (UID: \"dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a\") " pod="openshift-marketplace/certified-operators-9lbn2"
Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.898045 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9l4wd" podStartSLOduration=125.898030993 podStartE2EDuration="2m5.898030993s" podCreationTimestamp="2026-01-27 13:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:23.896740514 +0000 UTC m=+147.107354653" watchObservedRunningTime="2026-01-27 13:09:23.898030993 +0000 UTC m=+147.108645112"
Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.913731 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cvqtc"]
Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.919761 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cvqtc"
Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.946553 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cvqtc"]
Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.960943 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl"
Jan 27 13:09:23 crc kubenswrapper[4786]: E0127 13:09:23.961384 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:24.461367887 +0000 UTC m=+147.671982006 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 13:09:23 crc kubenswrapper[4786]: I0127 13:09:23.969823 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc465\" (UniqueName: \"kubernetes.io/projected/dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a-kube-api-access-pc465\") pod \"certified-operators-9lbn2\" (UID: \"dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a\") " pod="openshift-marketplace/certified-operators-9lbn2"
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.035898 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9lbn2"
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.060651 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z4pv9"]
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.061645 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z4pv9"
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.062293 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.062449 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90f2a321-e151-44e9-a16b-4c9e7b883b64-utilities\") pod \"community-operators-cvqtc\" (UID: \"90f2a321-e151-44e9-a16b-4c9e7b883b64\") " pod="openshift-marketplace/community-operators-cvqtc"
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.062486 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90f2a321-e151-44e9-a16b-4c9e7b883b64-catalog-content\") pod \"community-operators-cvqtc\" (UID: \"90f2a321-e151-44e9-a16b-4c9e7b883b64\") " pod="openshift-marketplace/community-operators-cvqtc"
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.062554 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b9gh\" (UniqueName: \"kubernetes.io/projected/90f2a321-e151-44e9-a16b-4c9e7b883b64-kube-api-access-2b9gh\") pod \"community-operators-cvqtc\" (UID: \"90f2a321-e151-44e9-a16b-4c9e7b883b64\") " pod="openshift-marketplace/community-operators-cvqtc"
Jan 27 13:09:24 crc kubenswrapper[4786]: E0127 13:09:24.062670 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:24.562656695 +0000 UTC m=+147.773270814 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.109698 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z4pv9"]
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.112104 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6xjzj" podStartSLOduration=126.112092613 podStartE2EDuration="2m6.112092613s" podCreationTimestamp="2026-01-27 13:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:24.109949849 +0000 UTC m=+147.320563968" watchObservedRunningTime="2026-01-27 13:09:24.112092613 +0000 UTC m=+147.322706732"
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.163367 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90f2a321-e151-44e9-a16b-4c9e7b883b64-catalog-content\") pod \"community-operators-cvqtc\" (UID: \"90f2a321-e151-44e9-a16b-4c9e7b883b64\") " pod="openshift-marketplace/community-operators-cvqtc"
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.163412 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c25e7dc-42ad-4c09-8187-354fd9f6d954-catalog-content\") pod \"certified-operators-z4pv9\" (UID: \"5c25e7dc-42ad-4c09-8187-354fd9f6d954\") " pod="openshift-marketplace/certified-operators-z4pv9"
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.163460 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c25e7dc-42ad-4c09-8187-354fd9f6d954-utilities\") pod \"certified-operators-z4pv9\" (UID: \"5c25e7dc-42ad-4c09-8187-354fd9f6d954\") " pod="openshift-marketplace/certified-operators-z4pv9"
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.163485 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svqb2\" (UniqueName: \"kubernetes.io/projected/5c25e7dc-42ad-4c09-8187-354fd9f6d954-kube-api-access-svqb2\") pod \"certified-operators-z4pv9\" (UID: \"5c25e7dc-42ad-4c09-8187-354fd9f6d954\") " pod="openshift-marketplace/certified-operators-z4pv9"
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.163522 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b9gh\" (UniqueName: \"kubernetes.io/projected/90f2a321-e151-44e9-a16b-4c9e7b883b64-kube-api-access-2b9gh\") pod \"community-operators-cvqtc\" (UID: \"90f2a321-e151-44e9-a16b-4c9e7b883b64\") " pod="openshift-marketplace/community-operators-cvqtc"
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.163540 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl"
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.163573 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90f2a321-e151-44e9-a16b-4c9e7b883b64-utilities\") pod \"community-operators-cvqtc\" (UID: \"90f2a321-e151-44e9-a16b-4c9e7b883b64\") " pod="openshift-marketplace/community-operators-cvqtc"
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.163980 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90f2a321-e151-44e9-a16b-4c9e7b883b64-utilities\") pod \"community-operators-cvqtc\" (UID: \"90f2a321-e151-44e9-a16b-4c9e7b883b64\") " pod="openshift-marketplace/community-operators-cvqtc"
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.164222 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90f2a321-e151-44e9-a16b-4c9e7b883b64-catalog-content\") pod \"community-operators-cvqtc\" (UID: \"90f2a321-e151-44e9-a16b-4c9e7b883b64\") " pod="openshift-marketplace/community-operators-cvqtc"
Jan 27 13:09:24 crc kubenswrapper[4786]: E0127 13:09:24.164695 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:24.664684275 +0000 UTC m=+147.875298394 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.212895 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b9gh\" (UniqueName: \"kubernetes.io/projected/90f2a321-e151-44e9-a16b-4c9e7b883b64-kube-api-access-2b9gh\") pod \"community-operators-cvqtc\" (UID: \"90f2a321-e151-44e9-a16b-4c9e7b883b64\") " pod="openshift-marketplace/community-operators-cvqtc"
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.262375 4786 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-7l2ns container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.262437 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7l2ns" podUID="d3f971e8-68fd-40a7-902a-ba8c6110f14d" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.264690 4786 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5bptf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.264744 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5bptf" podUID="2ec75c03-cdcf-4d6e-97cd-453ea4d288f3" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.265477 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.265707 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c25e7dc-42ad-4c09-8187-354fd9f6d954-catalog-content\") pod \"certified-operators-z4pv9\" (UID: \"5c25e7dc-42ad-4c09-8187-354fd9f6d954\") " pod="openshift-marketplace/certified-operators-z4pv9"
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.265760 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c25e7dc-42ad-4c09-8187-354fd9f6d954-utilities\") pod \"certified-operators-z4pv9\" (UID: \"5c25e7dc-42ad-4c09-8187-354fd9f6d954\") " pod="openshift-marketplace/certified-operators-z4pv9"
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.265781 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svqb2\" (UniqueName: \"kubernetes.io/projected/5c25e7dc-42ad-4c09-8187-354fd9f6d954-kube-api-access-svqb2\") pod \"certified-operators-z4pv9\" (UID: \"5c25e7dc-42ad-4c09-8187-354fd9f6d954\") " pod="openshift-marketplace/certified-operators-z4pv9"
Jan 27 13:09:24 crc kubenswrapper[4786]: E0127 13:09:24.266138 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:24.766124818 +0000 UTC m=+147.976738937 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.266513 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c25e7dc-42ad-4c09-8187-354fd9f6d954-catalog-content\") pod \"certified-operators-z4pv9\" (UID: \"5c25e7dc-42ad-4c09-8187-354fd9f6d954\") " pod="openshift-marketplace/certified-operators-z4pv9"
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.266709 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c25e7dc-42ad-4c09-8187-354fd9f6d954-utilities\") pod \"certified-operators-z4pv9\" (UID: \"5c25e7dc-42ad-4c09-8187-354fd9f6d954\") " pod="openshift-marketplace/certified-operators-z4pv9"
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.279074 4786 patch_prober.go:28] interesting pod/router-default-5444994796-whlx4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 13:09:24 crc kubenswrapper[4786]: [-]has-synced failed: reason withheld
Jan 27 13:09:24 crc kubenswrapper[4786]: [+]process-running ok
Jan 27 13:09:24 crc kubenswrapper[4786]: healthz check failed
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.279408 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-whlx4" podUID="ddd8d0e9-9f75-4f78-9d96-9373fdaa9c08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.282888 4786 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-ndd4n container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.282922 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-ndd4n" podUID="294ae02f-293d-4db8-9a9f-6d3878c8ccf9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused"
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.292888 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cvqtc"
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.307384 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svqb2\" (UniqueName: \"kubernetes.io/projected/5c25e7dc-42ad-4c09-8187-354fd9f6d954-kube-api-access-svqb2\") pod \"certified-operators-z4pv9\" (UID: \"5c25e7dc-42ad-4c09-8187-354fd9f6d954\") " pod="openshift-marketplace/certified-operators-z4pv9"
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.313813 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2wgf7"
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.329954 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-qzpj9"
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.367931 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl"
Jan 27 13:09:24 crc kubenswrapper[4786]: E0127 13:09:24.373004 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:24.872991334 +0000 UTC m=+148.083605453 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.401660 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z4pv9"
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.474285 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.474791 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.474836 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.474876 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.474893 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 13:09:24 crc kubenswrapper[4786]: E0127 13:09:24.478738 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:24.978710044 +0000 UTC m=+148.189324163 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.490481 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.492354 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.499039 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.502624 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.572501 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7l2ns"
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.588443 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl"
Jan 27 13:09:24 crc kubenswrapper[4786]: E0127 13:09:24.588758 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:25.088747075 +0000 UTC m=+148.299361194 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.690056 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 13:09:24 crc kubenswrapper[4786]: E0127 13:09:24.690669 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:25.190643992 +0000 UTC m=+148.401258111 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.691376 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.725950 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.751977 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 13:09:24 crc kubenswrapper[4786]: E0127 13:09:24.810764 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:25.310719072 +0000 UTC m=+148.521333191 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.815907 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl"
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.876442 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9lbn2"]
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.920142 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 13:09:24 crc kubenswrapper[4786]: E0127 13:09:24.920452 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:25.420436752 +0000 UTC m=+148.631050871 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 13:09:24 crc kubenswrapper[4786]: I0127 13:09:24.980719 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sk9mj"]
Jan 27 13:09:25 crc kubenswrapper[4786]: I0127 13:09:25.021519 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl"
Jan 27 13:09:25 crc kubenswrapper[4786]: E0127 13:09:25.021914 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:25.521900826 +0000 UTC m=+148.732514945 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 13:09:25 crc kubenswrapper[4786]: I0127 13:09:25.130348 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 13:09:25 crc kubenswrapper[4786]: E0127 13:09:25.131041 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:25.631021949 +0000 UTC m=+148.841636068 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 13:09:25 crc kubenswrapper[4786]: I0127 13:09:25.232126 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl"
Jan 27 13:09:25 crc kubenswrapper[4786]: E0127 13:09:25.232803 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:25.732789941 +0000 UTC m=+148.943404060 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:25 crc kubenswrapper[4786]: I0127 13:09:25.282843 4786 patch_prober.go:28] interesting pod/router-default-5444994796-whlx4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 13:09:25 crc kubenswrapper[4786]: [-]has-synced failed: reason withheld Jan 27 13:09:25 crc kubenswrapper[4786]: [+]process-running ok Jan 27 13:09:25 crc kubenswrapper[4786]: healthz check failed Jan 27 13:09:25 crc kubenswrapper[4786]: I0127 13:09:25.282900 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-whlx4" podUID="ddd8d0e9-9f75-4f78-9d96-9373fdaa9c08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 13:09:25 crc kubenswrapper[4786]: I0127 13:09:25.287167 4786 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5bptf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 13:09:25 crc kubenswrapper[4786]: I0127 13:09:25.287229 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5bptf" podUID="2ec75c03-cdcf-4d6e-97cd-453ea4d288f3" containerName="packageserver" probeResult="failure" output="Get 
\"https://10.217.0.32:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 13:09:25 crc kubenswrapper[4786]: I0127 13:09:25.339125 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qvpfk" event={"ID":"e183bd5d-f0d0-4254-82d5-240578ae6d1a","Type":"ContainerStarted","Data":"7793aca4606bb0f220985b2971f4213480e01454308dd95daae50098a381990c"} Jan 27 13:09:25 crc kubenswrapper[4786]: I0127 13:09:25.339773 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:25 crc kubenswrapper[4786]: E0127 13:09:25.340112 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:25.84009529 +0000 UTC m=+149.050709409 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:25 crc kubenswrapper[4786]: I0127 13:09:25.342579 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cvqtc"] Jan 27 13:09:25 crc kubenswrapper[4786]: I0127 13:09:25.391631 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sk9mj" event={"ID":"f61b3f86-bdc9-44a2-a5a8-d9895393ddaa","Type":"ContainerStarted","Data":"33a3c9c01c9026134810c43756681d8fc8b192b15c7ffc08be083a5a65bd792d"} Jan 27 13:09:25 crc kubenswrapper[4786]: I0127 13:09:25.449817 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9lbn2" event={"ID":"dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a","Type":"ContainerStarted","Data":"19bea142f35f5555677c0091beec7c95a65816312437eb41ef285ad33e458194"} Jan 27 13:09:25 crc kubenswrapper[4786]: I0127 13:09:25.452474 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:25 crc kubenswrapper[4786]: E0127 13:09:25.455746 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 13:09:25.955729498 +0000 UTC m=+149.166343617 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:25 crc kubenswrapper[4786]: I0127 13:09:25.554155 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:25 crc kubenswrapper[4786]: E0127 13:09:25.554442 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:26.054427328 +0000 UTC m=+149.265041447 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:25 crc kubenswrapper[4786]: I0127 13:09:25.657097 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:25 crc kubenswrapper[4786]: E0127 13:09:25.657765 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:26.157752108 +0000 UTC m=+149.368366217 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:25 crc kubenswrapper[4786]: I0127 13:09:25.671775 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2js8d"] Jan 27 13:09:25 crc kubenswrapper[4786]: I0127 13:09:25.673006 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2js8d" Jan 27 13:09:25 crc kubenswrapper[4786]: I0127 13:09:25.684322 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 13:09:25 crc kubenswrapper[4786]: I0127 13:09:25.702331 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2js8d"] Jan 27 13:09:25 crc kubenswrapper[4786]: I0127 13:09:25.766452 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:25 crc kubenswrapper[4786]: I0127 13:09:25.766697 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9332175b-3747-40d6-892d-1c126a05b0c2-utilities\") pod \"redhat-marketplace-2js8d\" (UID: \"9332175b-3747-40d6-892d-1c126a05b0c2\") " pod="openshift-marketplace/redhat-marketplace-2js8d" Jan 27 13:09:25 crc 
kubenswrapper[4786]: I0127 13:09:25.766730 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9332175b-3747-40d6-892d-1c126a05b0c2-catalog-content\") pod \"redhat-marketplace-2js8d\" (UID: \"9332175b-3747-40d6-892d-1c126a05b0c2\") " pod="openshift-marketplace/redhat-marketplace-2js8d" Jan 27 13:09:25 crc kubenswrapper[4786]: I0127 13:09:25.766771 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrz4x\" (UniqueName: \"kubernetes.io/projected/9332175b-3747-40d6-892d-1c126a05b0c2-kube-api-access-mrz4x\") pod \"redhat-marketplace-2js8d\" (UID: \"9332175b-3747-40d6-892d-1c126a05b0c2\") " pod="openshift-marketplace/redhat-marketplace-2js8d" Jan 27 13:09:25 crc kubenswrapper[4786]: E0127 13:09:25.766929 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:26.266912762 +0000 UTC m=+149.477526881 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:25 crc kubenswrapper[4786]: I0127 13:09:25.819808 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z4pv9"] Jan 27 13:09:25 crc kubenswrapper[4786]: I0127 13:09:25.868171 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9332175b-3747-40d6-892d-1c126a05b0c2-catalog-content\") pod \"redhat-marketplace-2js8d\" (UID: \"9332175b-3747-40d6-892d-1c126a05b0c2\") " pod="openshift-marketplace/redhat-marketplace-2js8d" Jan 27 13:09:25 crc kubenswrapper[4786]: I0127 13:09:25.868231 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrz4x\" (UniqueName: \"kubernetes.io/projected/9332175b-3747-40d6-892d-1c126a05b0c2-kube-api-access-mrz4x\") pod \"redhat-marketplace-2js8d\" (UID: \"9332175b-3747-40d6-892d-1c126a05b0c2\") " pod="openshift-marketplace/redhat-marketplace-2js8d" Jan 27 13:09:25 crc kubenswrapper[4786]: I0127 13:09:25.868283 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:25 crc kubenswrapper[4786]: I0127 13:09:25.868350 4786 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9332175b-3747-40d6-892d-1c126a05b0c2-utilities\") pod \"redhat-marketplace-2js8d\" (UID: \"9332175b-3747-40d6-892d-1c126a05b0c2\") " pod="openshift-marketplace/redhat-marketplace-2js8d" Jan 27 13:09:25 crc kubenswrapper[4786]: I0127 13:09:25.869015 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9332175b-3747-40d6-892d-1c126a05b0c2-utilities\") pod \"redhat-marketplace-2js8d\" (UID: \"9332175b-3747-40d6-892d-1c126a05b0c2\") " pod="openshift-marketplace/redhat-marketplace-2js8d" Jan 27 13:09:25 crc kubenswrapper[4786]: I0127 13:09:25.869283 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9332175b-3747-40d6-892d-1c126a05b0c2-catalog-content\") pod \"redhat-marketplace-2js8d\" (UID: \"9332175b-3747-40d6-892d-1c126a05b0c2\") " pod="openshift-marketplace/redhat-marketplace-2js8d" Jan 27 13:09:25 crc kubenswrapper[4786]: E0127 13:09:25.869574 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:26.369545581 +0000 UTC m=+149.580159710 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:25 crc kubenswrapper[4786]: I0127 13:09:25.907583 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrz4x\" (UniqueName: \"kubernetes.io/projected/9332175b-3747-40d6-892d-1c126a05b0c2-kube-api-access-mrz4x\") pod \"redhat-marketplace-2js8d\" (UID: \"9332175b-3747-40d6-892d-1c126a05b0c2\") " pod="openshift-marketplace/redhat-marketplace-2js8d" Jan 27 13:09:25 crc kubenswrapper[4786]: I0127 13:09:25.976104 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:25 crc kubenswrapper[4786]: E0127 13:09:25.976454 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:26.476425047 +0000 UTC m=+149.687039166 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.044544 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-57tgc"] Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.045467 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-57tgc" Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.068974 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2js8d" Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.077158 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-57tgc"] Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.079249 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0-utilities\") pod \"redhat-marketplace-57tgc\" (UID: \"29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0\") " pod="openshift-marketplace/redhat-marketplace-57tgc" Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.079299 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0-catalog-content\") pod \"redhat-marketplace-57tgc\" (UID: \"29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0\") " pod="openshift-marketplace/redhat-marketplace-57tgc" Jan 27 
13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.079358 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.079394 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dxcs\" (UniqueName: \"kubernetes.io/projected/29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0-kube-api-access-8dxcs\") pod \"redhat-marketplace-57tgc\" (UID: \"29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0\") " pod="openshift-marketplace/redhat-marketplace-57tgc" Jan 27 13:09:26 crc kubenswrapper[4786]: E0127 13:09:26.080119 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:26.580107957 +0000 UTC m=+149.790722076 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.180213 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.180733 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dxcs\" (UniqueName: \"kubernetes.io/projected/29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0-kube-api-access-8dxcs\") pod \"redhat-marketplace-57tgc\" (UID: \"29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0\") " pod="openshift-marketplace/redhat-marketplace-57tgc" Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.180812 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0-utilities\") pod \"redhat-marketplace-57tgc\" (UID: \"29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0\") " pod="openshift-marketplace/redhat-marketplace-57tgc" Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.180850 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0-catalog-content\") pod \"redhat-marketplace-57tgc\" (UID: \"29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0\") " 
pod="openshift-marketplace/redhat-marketplace-57tgc" Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.181327 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0-catalog-content\") pod \"redhat-marketplace-57tgc\" (UID: \"29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0\") " pod="openshift-marketplace/redhat-marketplace-57tgc" Jan 27 13:09:26 crc kubenswrapper[4786]: E0127 13:09:26.181410 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:26.681393105 +0000 UTC m=+149.892007224 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.181953 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0-utilities\") pod \"redhat-marketplace-57tgc\" (UID: \"29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0\") " pod="openshift-marketplace/redhat-marketplace-57tgc" Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.222732 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dxcs\" (UniqueName: \"kubernetes.io/projected/29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0-kube-api-access-8dxcs\") pod \"redhat-marketplace-57tgc\" (UID: 
\"29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0\") " pod="openshift-marketplace/redhat-marketplace-57tgc" Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.231875 4786 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.283247 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:26 crc kubenswrapper[4786]: E0127 13:09:26.283712 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:26.783694554 +0000 UTC m=+149.994308673 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.289140 4786 patch_prober.go:28] interesting pod/router-default-5444994796-whlx4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 13:09:26 crc kubenswrapper[4786]: [-]has-synced failed: reason withheld Jan 27 13:09:26 crc kubenswrapper[4786]: [+]process-running ok Jan 27 13:09:26 crc kubenswrapper[4786]: healthz check failed Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.289216 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-whlx4" podUID="ddd8d0e9-9f75-4f78-9d96-9373fdaa9c08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 13:09:26 crc kubenswrapper[4786]: W0127 13:09:26.354143 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-e9fc589c4233c7415897d62181e99e6d78b343afaa5f737af4354a936b26e066 WatchSource:0}: Error finding container e9fc589c4233c7415897d62181e99e6d78b343afaa5f737af4354a936b26e066: Status 404 returned error can't find the container with id e9fc589c4233c7415897d62181e99e6d78b343afaa5f737af4354a936b26e066 Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.384463 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:26 crc kubenswrapper[4786]: E0127 13:09:26.385053 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:26.885033174 +0000 UTC m=+150.095647293 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.413671 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-57tgc" Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.485360 4786 generic.go:334] "Generic (PLEG): container finished" podID="90f2a321-e151-44e9-a16b-4c9e7b883b64" containerID="5eb8622633cebe2da1fcb88a323fbb80c75557d3933239f10766c08ed58587b8" exitCode=0 Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.485447 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cvqtc" event={"ID":"90f2a321-e151-44e9-a16b-4c9e7b883b64","Type":"ContainerDied","Data":"5eb8622633cebe2da1fcb88a323fbb80c75557d3933239f10766c08ed58587b8"} Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.485475 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cvqtc" event={"ID":"90f2a321-e151-44e9-a16b-4c9e7b883b64","Type":"ContainerStarted","Data":"1e9fd40d373accaf411287f25bcfa3b0311874c5b0a2e67a308da7dbffc48dde"} Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.486940 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:26 crc kubenswrapper[4786]: E0127 13:09:26.487306 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 13:09:26.987294761 +0000 UTC m=+150.197908880 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qrqjl" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.489791 4786 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.497420 4786 generic.go:334] "Generic (PLEG): container finished" podID="dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a" containerID="e35dea6d7c1a4193324a1a86efe987d57fdf93aa7f50bfdfbdf8cf688eac24d5" exitCode=0 Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.497492 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9lbn2" event={"ID":"dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a","Type":"ContainerDied","Data":"e35dea6d7c1a4193324a1a86efe987d57fdf93aa7f50bfdfbdf8cf688eac24d5"} Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.501821 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2js8d"] Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.528060 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qvpfk" event={"ID":"e183bd5d-f0d0-4254-82d5-240578ae6d1a","Type":"ContainerStarted","Data":"bb18acfaff25f593aeee72a8f724c21109c4715a6a85811efb527b567b41c5ab"} Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.537455 4786 generic.go:334] "Generic (PLEG): container finished" podID="f61b3f86-bdc9-44a2-a5a8-d9895393ddaa" containerID="09033df3b85ae61565835703d1efae15f3d622d48cbcf9b3d5f1c5f6e0d75b1c" exitCode=0 Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 
13:09:26.537512 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sk9mj" event={"ID":"f61b3f86-bdc9-44a2-a5a8-d9895393ddaa","Type":"ContainerDied","Data":"09033df3b85ae61565835703d1efae15f3d622d48cbcf9b3d5f1c5f6e0d75b1c"} Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.552764 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"e9fc589c4233c7415897d62181e99e6d78b343afaa5f737af4354a936b26e066"} Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.562899 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"0c33ac68557712aade799212429d027e50ef4f0e93ebb191521a3569e8c4a57a"} Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.562942 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"e87f26aaffe87327daf8711c098d042b92a37ad6e95169b07e565c3d78ff4294"} Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.582813 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"6a15ddc2cfa659c1198aa0b311efe278a1207553b5e2cf71d7f2fd897e341797"} Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.582844 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"15ae1115ab5010ffa1c3b65e548514cad8cfbc6238598e5891c9fce3dd38ccd4"} Jan 27 
13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.588444 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:26 crc kubenswrapper[4786]: E0127 13:09:26.590702 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 13:09:27.090679052 +0000 UTC m=+150.301293231 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.591298 4786 generic.go:334] "Generic (PLEG): container finished" podID="5c25e7dc-42ad-4c09-8187-354fd9f6d954" containerID="1bda6593ad146c6023d7d712e85bbbebb052d0c58d3e721678080531ef07d809" exitCode=0 Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.591412 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4pv9" event={"ID":"5c25e7dc-42ad-4c09-8187-354fd9f6d954","Type":"ContainerDied","Data":"1bda6593ad146c6023d7d712e85bbbebb052d0c58d3e721678080531ef07d809"} Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.591505 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4pv9" 
event={"ID":"5c25e7dc-42ad-4c09-8187-354fd9f6d954","Type":"ContainerStarted","Data":"d42d3a84d47378eb5227212eb88a930f3e4eefe0c66ebd91b5d41ecbada36f66"} Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.635441 4786 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-27T13:09:26.231900585Z","Handler":null,"Name":""} Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.642644 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f44tb"] Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.644159 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f44tb" Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.646803 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.649860 4786 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.649893 4786 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.666081 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f44tb"] Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.699241 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35ceca1f-028f-4e16-8c4c-1fe9094598c8-utilities\") pod \"redhat-operators-f44tb\" (UID: 
\"35ceca1f-028f-4e16-8c4c-1fe9094598c8\") " pod="openshift-marketplace/redhat-operators-f44tb" Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.699425 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35ceca1f-028f-4e16-8c4c-1fe9094598c8-catalog-content\") pod \"redhat-operators-f44tb\" (UID: \"35ceca1f-028f-4e16-8c4c-1fe9094598c8\") " pod="openshift-marketplace/redhat-operators-f44tb" Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.701112 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.701256 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrcx9\" (UniqueName: \"kubernetes.io/projected/35ceca1f-028f-4e16-8c4c-1fe9094598c8-kube-api-access-qrcx9\") pod \"redhat-operators-f44tb\" (UID: \"35ceca1f-028f-4e16-8c4c-1fe9094598c8\") " pod="openshift-marketplace/redhat-operators-f44tb" Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.721697 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-57tgc"] Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.721707 4786 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.721842 4786 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.746269 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qrqjl\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:26 crc kubenswrapper[4786]: W0127 13:09:26.782573 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29f8e8ac_fdcb_43f3_8128_d8ca56bc84d0.slice/crio-99e1286ec251848f73824c5159681d5feb2465f651ffdfa4c8ae77d0a9513f5e WatchSource:0}: Error finding container 99e1286ec251848f73824c5159681d5feb2465f651ffdfa4c8ae77d0a9513f5e: Status 404 returned error can't find the container with id 99e1286ec251848f73824c5159681d5feb2465f651ffdfa4c8ae77d0a9513f5e Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.802349 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.803031 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35ceca1f-028f-4e16-8c4c-1fe9094598c8-catalog-content\") pod \"redhat-operators-f44tb\" (UID: \"35ceca1f-028f-4e16-8c4c-1fe9094598c8\") " pod="openshift-marketplace/redhat-operators-f44tb" Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.803088 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrcx9\" (UniqueName: \"kubernetes.io/projected/35ceca1f-028f-4e16-8c4c-1fe9094598c8-kube-api-access-qrcx9\") pod \"redhat-operators-f44tb\" (UID: \"35ceca1f-028f-4e16-8c4c-1fe9094598c8\") " pod="openshift-marketplace/redhat-operators-f44tb" Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.803111 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35ceca1f-028f-4e16-8c4c-1fe9094598c8-utilities\") pod \"redhat-operators-f44tb\" (UID: \"35ceca1f-028f-4e16-8c4c-1fe9094598c8\") " pod="openshift-marketplace/redhat-operators-f44tb" Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.803497 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35ceca1f-028f-4e16-8c4c-1fe9094598c8-utilities\") pod \"redhat-operators-f44tb\" (UID: \"35ceca1f-028f-4e16-8c4c-1fe9094598c8\") " pod="openshift-marketplace/redhat-operators-f44tb" Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.803759 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35ceca1f-028f-4e16-8c4c-1fe9094598c8-catalog-content\") pod \"redhat-operators-f44tb\" (UID: \"35ceca1f-028f-4e16-8c4c-1fe9094598c8\") " pod="openshift-marketplace/redhat-operators-f44tb" Jan 27 13:09:26 crc 
kubenswrapper[4786]: I0127 13:09:26.823332 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.824002 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrcx9\" (UniqueName: \"kubernetes.io/projected/35ceca1f-028f-4e16-8c4c-1fe9094598c8-kube-api-access-qrcx9\") pod \"redhat-operators-f44tb\" (UID: \"35ceca1f-028f-4e16-8c4c-1fe9094598c8\") " pod="openshift-marketplace/redhat-operators-f44tb" Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.955320 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:26 crc kubenswrapper[4786]: I0127 13:09:26.986673 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f44tb" Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.046414 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ccwr8"] Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.047857 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ccwr8" Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.063018 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ccwr8"] Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.108541 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acf3913c-c965-4da8-91b3-6d4f2fa40f6b-utilities\") pod \"redhat-operators-ccwr8\" (UID: \"acf3913c-c965-4da8-91b3-6d4f2fa40f6b\") " pod="openshift-marketplace/redhat-operators-ccwr8" Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.108646 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acf3913c-c965-4da8-91b3-6d4f2fa40f6b-catalog-content\") pod \"redhat-operators-ccwr8\" (UID: \"acf3913c-c965-4da8-91b3-6d4f2fa40f6b\") " pod="openshift-marketplace/redhat-operators-ccwr8" Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.108678 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b6xt\" (UniqueName: \"kubernetes.io/projected/acf3913c-c965-4da8-91b3-6d4f2fa40f6b-kube-api-access-5b6xt\") pod \"redhat-operators-ccwr8\" (UID: \"acf3913c-c965-4da8-91b3-6d4f2fa40f6b\") " pod="openshift-marketplace/redhat-operators-ccwr8" Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.210630 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qrqjl"] Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.230671 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b6xt\" (UniqueName: \"kubernetes.io/projected/acf3913c-c965-4da8-91b3-6d4f2fa40f6b-kube-api-access-5b6xt\") pod \"redhat-operators-ccwr8\" (UID: 
\"acf3913c-c965-4da8-91b3-6d4f2fa40f6b\") " pod="openshift-marketplace/redhat-operators-ccwr8" Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.231040 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acf3913c-c965-4da8-91b3-6d4f2fa40f6b-utilities\") pod \"redhat-operators-ccwr8\" (UID: \"acf3913c-c965-4da8-91b3-6d4f2fa40f6b\") " pod="openshift-marketplace/redhat-operators-ccwr8" Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.231232 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acf3913c-c965-4da8-91b3-6d4f2fa40f6b-catalog-content\") pod \"redhat-operators-ccwr8\" (UID: \"acf3913c-c965-4da8-91b3-6d4f2fa40f6b\") " pod="openshift-marketplace/redhat-operators-ccwr8" Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.232416 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acf3913c-c965-4da8-91b3-6d4f2fa40f6b-catalog-content\") pod \"redhat-operators-ccwr8\" (UID: \"acf3913c-c965-4da8-91b3-6d4f2fa40f6b\") " pod="openshift-marketplace/redhat-operators-ccwr8" Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.234011 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acf3913c-c965-4da8-91b3-6d4f2fa40f6b-utilities\") pod \"redhat-operators-ccwr8\" (UID: \"acf3913c-c965-4da8-91b3-6d4f2fa40f6b\") " pod="openshift-marketplace/redhat-operators-ccwr8" Jan 27 13:09:27 crc kubenswrapper[4786]: W0127 13:09:27.246815 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6adde762_4e97_44eb_a96c_14a79ec7998a.slice/crio-99c0216ee1db2fa0acebbece627e56ff8682441f61c9a857ae28bb7bdc41094c WatchSource:0}: Error finding container 
99c0216ee1db2fa0acebbece627e56ff8682441f61c9a857ae28bb7bdc41094c: Status 404 returned error can't find the container with id 99c0216ee1db2fa0acebbece627e56ff8682441f61c9a857ae28bb7bdc41094c Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.265235 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b6xt\" (UniqueName: \"kubernetes.io/projected/acf3913c-c965-4da8-91b3-6d4f2fa40f6b-kube-api-access-5b6xt\") pod \"redhat-operators-ccwr8\" (UID: \"acf3913c-c965-4da8-91b3-6d4f2fa40f6b\") " pod="openshift-marketplace/redhat-operators-ccwr8" Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.274748 4786 patch_prober.go:28] interesting pod/router-default-5444994796-whlx4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 13:09:27 crc kubenswrapper[4786]: [-]has-synced failed: reason withheld Jan 27 13:09:27 crc kubenswrapper[4786]: [+]process-running ok Jan 27 13:09:27 crc kubenswrapper[4786]: healthz check failed Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.274872 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-whlx4" podUID="ddd8d0e9-9f75-4f78-9d96-9373fdaa9c08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.318865 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f44tb"] Jan 27 13:09:27 crc kubenswrapper[4786]: W0127 13:09:27.356401 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35ceca1f_028f_4e16_8c4c_1fe9094598c8.slice/crio-e7dc1f88ce9f17010641ca2cd3eed47cfe0904fce64393209c8e5dba3178699a WatchSource:0}: Error finding container e7dc1f88ce9f17010641ca2cd3eed47cfe0904fce64393209c8e5dba3178699a: 
Status 404 returned error can't find the container with id e7dc1f88ce9f17010641ca2cd3eed47cfe0904fce64393209c8e5dba3178699a Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.415969 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ccwr8" Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.478953 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.585711 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.586796 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.590353 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.591164 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.595992 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.608458 4786 generic.go:334] "Generic (PLEG): container finished" podID="29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0" containerID="cb55c564a54399cde840731a7273b833678db36cf42e9cca3a4e22c67e5106f7" exitCode=0 Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.608549 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-57tgc" 
event={"ID":"29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0","Type":"ContainerDied","Data":"cb55c564a54399cde840731a7273b833678db36cf42e9cca3a4e22c67e5106f7"} Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.608588 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-57tgc" event={"ID":"29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0","Type":"ContainerStarted","Data":"99e1286ec251848f73824c5159681d5feb2465f651ffdfa4c8ae77d0a9513f5e"} Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.619151 4786 generic.go:334] "Generic (PLEG): container finished" podID="9332175b-3747-40d6-892d-1c126a05b0c2" containerID="a79bad1016e1610bf76fe702871c6688f8ded8472ae11401cb0c9654938f4c29" exitCode=0 Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.619390 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2js8d" event={"ID":"9332175b-3747-40d6-892d-1c126a05b0c2","Type":"ContainerDied","Data":"a79bad1016e1610bf76fe702871c6688f8ded8472ae11401cb0c9654938f4c29"} Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.619499 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2js8d" event={"ID":"9332175b-3747-40d6-892d-1c126a05b0c2","Type":"ContainerStarted","Data":"c4e42739448583e4dd78bc479edce44169eaa9951d2dc5323d3e049348ed8dba"} Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.626168 4786 generic.go:334] "Generic (PLEG): container finished" podID="35ceca1f-028f-4e16-8c4c-1fe9094598c8" containerID="3efabdb6d8a89f5532693df7f7f81e9ac055dcedabca2538e148a8652e1b7e93" exitCode=0 Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.626251 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f44tb" event={"ID":"35ceca1f-028f-4e16-8c4c-1fe9094598c8","Type":"ContainerDied","Data":"3efabdb6d8a89f5532693df7f7f81e9ac055dcedabca2538e148a8652e1b7e93"} Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 
13:09:27.626285 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f44tb" event={"ID":"35ceca1f-028f-4e16-8c4c-1fe9094598c8","Type":"ContainerStarted","Data":"e7dc1f88ce9f17010641ca2cd3eed47cfe0904fce64393209c8e5dba3178699a"} Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.636376 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e37df03-3728-4e81-8cdf-55cfe1e59391-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5e37df03-3728-4e81-8cdf-55cfe1e59391\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.636503 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e37df03-3728-4e81-8cdf-55cfe1e59391-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5e37df03-3728-4e81-8cdf-55cfe1e59391\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.656229 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qvpfk" event={"ID":"e183bd5d-f0d0-4254-82d5-240578ae6d1a","Type":"ContainerStarted","Data":"18bf120e816a67e367e425248e640c3af4004b10949e4152debe29c325cf37ce"} Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.664214 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" event={"ID":"6adde762-4e97-44eb-a96c-14a79ec7998a","Type":"ContainerStarted","Data":"48c8498cc54d432918b9729360a9e03e4b79eab632adaf0228e242007174bb14"} Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.664257 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" 
event={"ID":"6adde762-4e97-44eb-a96c-14a79ec7998a","Type":"ContainerStarted","Data":"99c0216ee1db2fa0acebbece627e56ff8682441f61c9a857ae28bb7bdc41094c"} Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.664959 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.680064 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"2d96200432143a64a5271cbf4a27bcd1661445da9c212fc4e9a94a76e3ffc87d"} Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.680838 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.723074 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" podStartSLOduration=129.72305055 podStartE2EDuration="2m9.72305055s" podCreationTimestamp="2026-01-27 13:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:27.718592177 +0000 UTC m=+150.929206326" watchObservedRunningTime="2026-01-27 13:09:27.72305055 +0000 UTC m=+150.933664669" Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.742202 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e37df03-3728-4e81-8cdf-55cfe1e59391-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5e37df03-3728-4e81-8cdf-55cfe1e59391\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.742333 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e37df03-3728-4e81-8cdf-55cfe1e59391-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5e37df03-3728-4e81-8cdf-55cfe1e59391\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.743733 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e37df03-3728-4e81-8cdf-55cfe1e59391-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5e37df03-3728-4e81-8cdf-55cfe1e59391\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.759133 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-qvpfk" podStartSLOduration=11.759109918 podStartE2EDuration="11.759109918s" podCreationTimestamp="2026-01-27 13:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:27.758406378 +0000 UTC m=+150.969020507" watchObservedRunningTime="2026-01-27 13:09:27.759109918 +0000 UTC m=+150.969724057" Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.780442 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e37df03-3728-4e81-8cdf-55cfe1e59391-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5e37df03-3728-4e81-8cdf-55cfe1e59391\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.835556 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ccwr8"] Jan 27 13:09:27 crc kubenswrapper[4786]: W0127 13:09:27.910749 4786 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podacf3913c_c965_4da8_91b3_6d4f2fa40f6b.slice/crio-fa0187f534f7a1ed3337a60c3d3a74b93b7b62d951d656d1d1e9acccb87bbe42 WatchSource:0}: Error finding container fa0187f534f7a1ed3337a60c3d3a74b93b7b62d951d656d1d1e9acccb87bbe42: Status 404 returned error can't find the container with id fa0187f534f7a1ed3337a60c3d3a74b93b7b62d951d656d1d1e9acccb87bbe42 Jan 27 13:09:27 crc kubenswrapper[4786]: I0127 13:09:27.942709 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 13:09:28 crc kubenswrapper[4786]: I0127 13:09:28.276282 4786 patch_prober.go:28] interesting pod/router-default-5444994796-whlx4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 13:09:28 crc kubenswrapper[4786]: [-]has-synced failed: reason withheld Jan 27 13:09:28 crc kubenswrapper[4786]: [+]process-running ok Jan 27 13:09:28 crc kubenswrapper[4786]: healthz check failed Jan 27 13:09:28 crc kubenswrapper[4786]: I0127 13:09:28.276804 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-whlx4" podUID="ddd8d0e9-9f75-4f78-9d96-9373fdaa9c08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 13:09:28 crc kubenswrapper[4786]: I0127 13:09:28.287942 4786 patch_prober.go:28] interesting pod/downloads-7954f5f757-8nm76 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Jan 27 13:09:28 crc kubenswrapper[4786]: I0127 13:09:28.288019 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8nm76" podUID="cbf5f627-0aa5-4a32-840c-f76373e2150e" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Jan 27 13:09:28 crc kubenswrapper[4786]: I0127 13:09:28.288062 4786 patch_prober.go:28] interesting pod/downloads-7954f5f757-8nm76 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Jan 27 13:09:28 crc kubenswrapper[4786]: I0127 13:09:28.288147 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-8nm76" podUID="cbf5f627-0aa5-4a32-840c-f76373e2150e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Jan 27 13:09:28 crc kubenswrapper[4786]: I0127 13:09:28.403299 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9l4wd" Jan 27 13:09:28 crc kubenswrapper[4786]: I0127 13:09:28.409531 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9l4wd" Jan 27 13:09:28 crc kubenswrapper[4786]: I0127 13:09:28.560435 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 13:09:28 crc kubenswrapper[4786]: I0127 13:09:28.572064 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-vwjp5" Jan 27 13:09:28 crc kubenswrapper[4786]: I0127 13:09:28.572582 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-vwjp5" Jan 27 13:09:28 crc kubenswrapper[4786]: I0127 13:09:28.573738 4786 patch_prober.go:28] interesting pod/console-f9d7485db-vwjp5 container/console namespace/openshift-console: Startup probe status=failure 
output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 27 13:09:28 crc kubenswrapper[4786]: I0127 13:09:28.573789 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-vwjp5" podUID="47f5a0b2-7757-4795-901e-d175d64ebe67" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 27 13:09:28 crc kubenswrapper[4786]: I0127 13:09:28.663284 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-j6ww5" Jan 27 13:09:28 crc kubenswrapper[4786]: I0127 13:09:28.672455 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-j6ww5" Jan 27 13:09:28 crc kubenswrapper[4786]: I0127 13:09:28.761734 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-ndd4n" Jan 27 13:09:28 crc kubenswrapper[4786]: I0127 13:09:28.827030 4786 generic.go:334] "Generic (PLEG): container finished" podID="acf3913c-c965-4da8-91b3-6d4f2fa40f6b" containerID="99368af4aa647e5a971892be6f4ce7bb7458c82c864d55c4c54d3c0984c1bf79" exitCode=0 Jan 27 13:09:28 crc kubenswrapper[4786]: I0127 13:09:28.827131 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccwr8" event={"ID":"acf3913c-c965-4da8-91b3-6d4f2fa40f6b","Type":"ContainerDied","Data":"99368af4aa647e5a971892be6f4ce7bb7458c82c864d55c4c54d3c0984c1bf79"} Jan 27 13:09:28 crc kubenswrapper[4786]: I0127 13:09:28.827159 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccwr8" event={"ID":"acf3913c-c965-4da8-91b3-6d4f2fa40f6b","Type":"ContainerStarted","Data":"fa0187f534f7a1ed3337a60c3d3a74b93b7b62d951d656d1d1e9acccb87bbe42"} Jan 27 13:09:28 crc 
kubenswrapper[4786]: I0127 13:09:28.828733 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5e37df03-3728-4e81-8cdf-55cfe1e59391","Type":"ContainerStarted","Data":"cd4c04e8fe3c854e336fb9db33aa69864b618da62d8652f1b0ffe25bc2865689"} Jan 27 13:09:28 crc kubenswrapper[4786]: I0127 13:09:28.874997 4786 generic.go:334] "Generic (PLEG): container finished" podID="3cf531c6-d1a1-4f65-af72-093ffdb034c1" containerID="32509229f88f585984da1ef764f58e212b324ac35555669edaf5a7111aef9858" exitCode=0 Jan 27 13:09:28 crc kubenswrapper[4786]: I0127 13:09:28.875521 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491980-6pf7c" event={"ID":"3cf531c6-d1a1-4f65-af72-093ffdb034c1","Type":"ContainerDied","Data":"32509229f88f585984da1ef764f58e212b324ac35555669edaf5a7111aef9858"} Jan 27 13:09:29 crc kubenswrapper[4786]: I0127 13:09:29.064252 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5bptf" Jan 27 13:09:29 crc kubenswrapper[4786]: I0127 13:09:29.264007 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-whlx4" Jan 27 13:09:29 crc kubenswrapper[4786]: I0127 13:09:29.269825 4786 patch_prober.go:28] interesting pod/router-default-5444994796-whlx4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 13:09:29 crc kubenswrapper[4786]: [-]has-synced failed: reason withheld Jan 27 13:09:29 crc kubenswrapper[4786]: [+]process-running ok Jan 27 13:09:29 crc kubenswrapper[4786]: healthz check failed Jan 27 13:09:29 crc kubenswrapper[4786]: I0127 13:09:29.269896 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-whlx4" 
podUID="ddd8d0e9-9f75-4f78-9d96-9373fdaa9c08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 13:09:29 crc kubenswrapper[4786]: I0127 13:09:29.895929 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5e37df03-3728-4e81-8cdf-55cfe1e59391","Type":"ContainerStarted","Data":"44ce52c90b1e063ae72224bfaddefd05efaefe8fb2c6b6f70c4c6394bbb4a0f4"} Jan 27 13:09:29 crc kubenswrapper[4786]: I0127 13:09:29.912670 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.9126541980000002 podStartE2EDuration="2.912654198s" podCreationTimestamp="2026-01-27 13:09:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:29.911960637 +0000 UTC m=+153.122574756" watchObservedRunningTime="2026-01-27 13:09:29.912654198 +0000 UTC m=+153.123268317" Jan 27 13:09:30 crc kubenswrapper[4786]: I0127 13:09:30.164195 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491980-6pf7c" Jan 27 13:09:30 crc kubenswrapper[4786]: I0127 13:09:30.234130 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3cf531c6-d1a1-4f65-af72-093ffdb034c1-config-volume\") pod \"3cf531c6-d1a1-4f65-af72-093ffdb034c1\" (UID: \"3cf531c6-d1a1-4f65-af72-093ffdb034c1\") " Jan 27 13:09:30 crc kubenswrapper[4786]: I0127 13:09:30.234247 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtgdd\" (UniqueName: \"kubernetes.io/projected/3cf531c6-d1a1-4f65-af72-093ffdb034c1-kube-api-access-qtgdd\") pod \"3cf531c6-d1a1-4f65-af72-093ffdb034c1\" (UID: \"3cf531c6-d1a1-4f65-af72-093ffdb034c1\") " Jan 27 13:09:30 crc kubenswrapper[4786]: I0127 13:09:30.234343 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3cf531c6-d1a1-4f65-af72-093ffdb034c1-secret-volume\") pod \"3cf531c6-d1a1-4f65-af72-093ffdb034c1\" (UID: \"3cf531c6-d1a1-4f65-af72-093ffdb034c1\") " Jan 27 13:09:30 crc kubenswrapper[4786]: I0127 13:09:30.235080 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cf531c6-d1a1-4f65-af72-093ffdb034c1-config-volume" (OuterVolumeSpecName: "config-volume") pod "3cf531c6-d1a1-4f65-af72-093ffdb034c1" (UID: "3cf531c6-d1a1-4f65-af72-093ffdb034c1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:09:30 crc kubenswrapper[4786]: I0127 13:09:30.241987 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cf531c6-d1a1-4f65-af72-093ffdb034c1-kube-api-access-qtgdd" (OuterVolumeSpecName: "kube-api-access-qtgdd") pod "3cf531c6-d1a1-4f65-af72-093ffdb034c1" (UID: "3cf531c6-d1a1-4f65-af72-093ffdb034c1"). 
InnerVolumeSpecName "kube-api-access-qtgdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:09:30 crc kubenswrapper[4786]: I0127 13:09:30.242029 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cf531c6-d1a1-4f65-af72-093ffdb034c1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3cf531c6-d1a1-4f65-af72-093ffdb034c1" (UID: "3cf531c6-d1a1-4f65-af72-093ffdb034c1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:09:30 crc kubenswrapper[4786]: I0127 13:09:30.335703 4786 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3cf531c6-d1a1-4f65-af72-093ffdb034c1-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 13:09:30 crc kubenswrapper[4786]: I0127 13:09:30.335734 4786 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3cf531c6-d1a1-4f65-af72-093ffdb034c1-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 13:09:30 crc kubenswrapper[4786]: I0127 13:09:30.335743 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtgdd\" (UniqueName: \"kubernetes.io/projected/3cf531c6-d1a1-4f65-af72-093ffdb034c1-kube-api-access-qtgdd\") on node \"crc\" DevicePath \"\"" Jan 27 13:09:30 crc kubenswrapper[4786]: I0127 13:09:30.350336 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-whlx4" Jan 27 13:09:30 crc kubenswrapper[4786]: I0127 13:09:30.353959 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-whlx4" Jan 27 13:09:30 crc kubenswrapper[4786]: I0127 13:09:30.829261 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-ck8s7" Jan 27 13:09:30 crc kubenswrapper[4786]: I0127 13:09:30.931929 4786 generic.go:334] "Generic 
(PLEG): container finished" podID="5e37df03-3728-4e81-8cdf-55cfe1e59391" containerID="44ce52c90b1e063ae72224bfaddefd05efaefe8fb2c6b6f70c4c6394bbb4a0f4" exitCode=0 Jan 27 13:09:30 crc kubenswrapper[4786]: I0127 13:09:30.932062 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5e37df03-3728-4e81-8cdf-55cfe1e59391","Type":"ContainerDied","Data":"44ce52c90b1e063ae72224bfaddefd05efaefe8fb2c6b6f70c4c6394bbb4a0f4"} Jan 27 13:09:30 crc kubenswrapper[4786]: I0127 13:09:30.943799 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491980-6pf7c" Jan 27 13:09:30 crc kubenswrapper[4786]: I0127 13:09:30.943813 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491980-6pf7c" event={"ID":"3cf531c6-d1a1-4f65-af72-093ffdb034c1","Type":"ContainerDied","Data":"6711149d338b16dea7727f97cfb21a760cc657c2d8ff37080ceec43b0478fde1"} Jan 27 13:09:30 crc kubenswrapper[4786]: I0127 13:09:30.943914 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6711149d338b16dea7727f97cfb21a760cc657c2d8ff37080ceec43b0478fde1" Jan 27 13:09:32 crc kubenswrapper[4786]: I0127 13:09:32.024668 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 13:09:32 crc kubenswrapper[4786]: E0127 13:09:32.025201 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cf531c6-d1a1-4f65-af72-093ffdb034c1" containerName="collect-profiles" Jan 27 13:09:32 crc kubenswrapper[4786]: I0127 13:09:32.025241 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cf531c6-d1a1-4f65-af72-093ffdb034c1" containerName="collect-profiles" Jan 27 13:09:32 crc kubenswrapper[4786]: I0127 13:09:32.025877 4786 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3cf531c6-d1a1-4f65-af72-093ffdb034c1" containerName="collect-profiles" Jan 27 13:09:32 crc kubenswrapper[4786]: I0127 13:09:32.028121 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 13:09:32 crc kubenswrapper[4786]: I0127 13:09:32.028242 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 13:09:32 crc kubenswrapper[4786]: I0127 13:09:32.030855 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 13:09:32 crc kubenswrapper[4786]: I0127 13:09:32.031272 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 13:09:32 crc kubenswrapper[4786]: I0127 13:09:32.174236 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/593c7cfb-1e0d-403e-aabe-4e39fb0bba88-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"593c7cfb-1e0d-403e-aabe-4e39fb0bba88\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 13:09:32 crc kubenswrapper[4786]: I0127 13:09:32.174820 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/593c7cfb-1e0d-403e-aabe-4e39fb0bba88-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"593c7cfb-1e0d-403e-aabe-4e39fb0bba88\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 13:09:32 crc kubenswrapper[4786]: I0127 13:09:32.290276 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/593c7cfb-1e0d-403e-aabe-4e39fb0bba88-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"593c7cfb-1e0d-403e-aabe-4e39fb0bba88\") " 
pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 13:09:32 crc kubenswrapper[4786]: I0127 13:09:32.290392 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/593c7cfb-1e0d-403e-aabe-4e39fb0bba88-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"593c7cfb-1e0d-403e-aabe-4e39fb0bba88\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 13:09:32 crc kubenswrapper[4786]: I0127 13:09:32.290461 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/593c7cfb-1e0d-403e-aabe-4e39fb0bba88-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"593c7cfb-1e0d-403e-aabe-4e39fb0bba88\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 13:09:32 crc kubenswrapper[4786]: I0127 13:09:32.299779 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 13:09:32 crc kubenswrapper[4786]: I0127 13:09:32.313617 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/593c7cfb-1e0d-403e-aabe-4e39fb0bba88-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"593c7cfb-1e0d-403e-aabe-4e39fb0bba88\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 13:09:32 crc kubenswrapper[4786]: I0127 13:09:32.355148 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 13:09:32 crc kubenswrapper[4786]: I0127 13:09:32.393751 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e37df03-3728-4e81-8cdf-55cfe1e59391-kubelet-dir\") pod \"5e37df03-3728-4e81-8cdf-55cfe1e59391\" (UID: \"5e37df03-3728-4e81-8cdf-55cfe1e59391\") " Jan 27 13:09:32 crc kubenswrapper[4786]: I0127 13:09:32.393892 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e37df03-3728-4e81-8cdf-55cfe1e59391-kube-api-access\") pod \"5e37df03-3728-4e81-8cdf-55cfe1e59391\" (UID: \"5e37df03-3728-4e81-8cdf-55cfe1e59391\") " Jan 27 13:09:32 crc kubenswrapper[4786]: I0127 13:09:32.395169 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e37df03-3728-4e81-8cdf-55cfe1e59391-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5e37df03-3728-4e81-8cdf-55cfe1e59391" (UID: "5e37df03-3728-4e81-8cdf-55cfe1e59391"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:09:32 crc kubenswrapper[4786]: I0127 13:09:32.419514 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e37df03-3728-4e81-8cdf-55cfe1e59391-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5e37df03-3728-4e81-8cdf-55cfe1e59391" (UID: "5e37df03-3728-4e81-8cdf-55cfe1e59391"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:09:32 crc kubenswrapper[4786]: I0127 13:09:32.497143 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e37df03-3728-4e81-8cdf-55cfe1e59391-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 13:09:32 crc kubenswrapper[4786]: I0127 13:09:32.497185 4786 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e37df03-3728-4e81-8cdf-55cfe1e59391-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 13:09:32 crc kubenswrapper[4786]: I0127 13:09:32.956415 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 13:09:32 crc kubenswrapper[4786]: I0127 13:09:32.979528 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5e37df03-3728-4e81-8cdf-55cfe1e59391","Type":"ContainerDied","Data":"cd4c04e8fe3c854e336fb9db33aa69864b618da62d8652f1b0ffe25bc2865689"} Jan 27 13:09:32 crc kubenswrapper[4786]: I0127 13:09:32.979649 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd4c04e8fe3c854e336fb9db33aa69864b618da62d8652f1b0ffe25bc2865689" Jan 27 13:09:32 crc kubenswrapper[4786]: I0127 13:09:32.979896 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 13:09:34 crc kubenswrapper[4786]: I0127 13:09:34.020251 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"593c7cfb-1e0d-403e-aabe-4e39fb0bba88","Type":"ContainerStarted","Data":"bd31e11f822699ea5d8de40d73804f4df531da4fd2fbd3e0439ce0a104ebf8f6"} Jan 27 13:09:34 crc kubenswrapper[4786]: I0127 13:09:34.020701 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"593c7cfb-1e0d-403e-aabe-4e39fb0bba88","Type":"ContainerStarted","Data":"fc7f92a5da9b3ac397147f4720c8627befe14dceb98b788ce1b034944f8872fa"} Jan 27 13:09:34 crc kubenswrapper[4786]: I0127 13:09:34.049429 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.049408197 podStartE2EDuration="3.049408197s" podCreationTimestamp="2026-01-27 13:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:09:34.047104518 +0000 UTC m=+157.257718637" watchObservedRunningTime="2026-01-27 13:09:34.049408197 +0000 UTC m=+157.260022316" Jan 27 13:09:35 crc kubenswrapper[4786]: I0127 13:09:35.041982 4786 generic.go:334] "Generic (PLEG): container finished" podID="593c7cfb-1e0d-403e-aabe-4e39fb0bba88" containerID="bd31e11f822699ea5d8de40d73804f4df531da4fd2fbd3e0439ce0a104ebf8f6" exitCode=0 Jan 27 13:09:35 crc kubenswrapper[4786]: I0127 13:09:35.042005 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"593c7cfb-1e0d-403e-aabe-4e39fb0bba88","Type":"ContainerDied","Data":"bd31e11f822699ea5d8de40d73804f4df531da4fd2fbd3e0439ce0a104ebf8f6"} Jan 27 13:09:38 crc kubenswrapper[4786]: I0127 13:09:38.288403 4786 patch_prober.go:28] interesting 
pod/downloads-7954f5f757-8nm76 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Jan 27 13:09:38 crc kubenswrapper[4786]: I0127 13:09:38.288861 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-8nm76" podUID="cbf5f627-0aa5-4a32-840c-f76373e2150e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Jan 27 13:09:38 crc kubenswrapper[4786]: I0127 13:09:38.288768 4786 patch_prober.go:28] interesting pod/downloads-7954f5f757-8nm76 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Jan 27 13:09:38 crc kubenswrapper[4786]: I0127 13:09:38.288972 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8nm76" podUID="cbf5f627-0aa5-4a32-840c-f76373e2150e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Jan 27 13:09:38 crc kubenswrapper[4786]: I0127 13:09:38.577542 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-vwjp5" Jan 27 13:09:38 crc kubenswrapper[4786]: I0127 13:09:38.581939 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-vwjp5" Jan 27 13:09:39 crc kubenswrapper[4786]: I0127 13:09:39.532926 4786 patch_prober.go:28] interesting pod/machine-config-daemon-7bxtk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Jan 27 13:09:39 crc kubenswrapper[4786]: I0127 13:09:39.534367 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:09:40 crc kubenswrapper[4786]: I0127 13:09:40.249504 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496-metrics-certs\") pod \"network-metrics-daemon-8jf77\" (UID: \"a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496\") " pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:09:40 crc kubenswrapper[4786]: I0127 13:09:40.259067 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496-metrics-certs\") pod \"network-metrics-daemon-8jf77\" (UID: \"a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496\") " pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:09:40 crc kubenswrapper[4786]: I0127 13:09:40.385378 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8jf77" Jan 27 13:09:45 crc kubenswrapper[4786]: I0127 13:09:45.118528 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"593c7cfb-1e0d-403e-aabe-4e39fb0bba88","Type":"ContainerDied","Data":"fc7f92a5da9b3ac397147f4720c8627befe14dceb98b788ce1b034944f8872fa"} Jan 27 13:09:45 crc kubenswrapper[4786]: I0127 13:09:45.119507 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc7f92a5da9b3ac397147f4720c8627befe14dceb98b788ce1b034944f8872fa" Jan 27 13:09:45 crc kubenswrapper[4786]: I0127 13:09:45.128493 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 13:09:45 crc kubenswrapper[4786]: I0127 13:09:45.233691 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/593c7cfb-1e0d-403e-aabe-4e39fb0bba88-kube-api-access\") pod \"593c7cfb-1e0d-403e-aabe-4e39fb0bba88\" (UID: \"593c7cfb-1e0d-403e-aabe-4e39fb0bba88\") " Jan 27 13:09:45 crc kubenswrapper[4786]: I0127 13:09:45.234774 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/593c7cfb-1e0d-403e-aabe-4e39fb0bba88-kubelet-dir\") pod \"593c7cfb-1e0d-403e-aabe-4e39fb0bba88\" (UID: \"593c7cfb-1e0d-403e-aabe-4e39fb0bba88\") " Jan 27 13:09:45 crc kubenswrapper[4786]: I0127 13:09:45.234868 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/593c7cfb-1e0d-403e-aabe-4e39fb0bba88-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "593c7cfb-1e0d-403e-aabe-4e39fb0bba88" (UID: "593c7cfb-1e0d-403e-aabe-4e39fb0bba88"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:09:45 crc kubenswrapper[4786]: I0127 13:09:45.235085 4786 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/593c7cfb-1e0d-403e-aabe-4e39fb0bba88-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 13:09:45 crc kubenswrapper[4786]: I0127 13:09:45.239668 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593c7cfb-1e0d-403e-aabe-4e39fb0bba88-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "593c7cfb-1e0d-403e-aabe-4e39fb0bba88" (UID: "593c7cfb-1e0d-403e-aabe-4e39fb0bba88"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:09:45 crc kubenswrapper[4786]: I0127 13:09:45.336227 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/593c7cfb-1e0d-403e-aabe-4e39fb0bba88-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 13:09:46 crc kubenswrapper[4786]: I0127 13:09:46.123111 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 13:09:46 crc kubenswrapper[4786]: I0127 13:09:46.960859 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:09:48 crc kubenswrapper[4786]: I0127 13:09:48.302771 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-8nm76" Jan 27 13:09:59 crc kubenswrapper[4786]: I0127 13:09:59.331032 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2j2wh" Jan 27 13:10:00 crc kubenswrapper[4786]: E0127 13:10:00.639678 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 27 13:10:00 crc kubenswrapper[4786]: E0127 13:10:00.640021 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mrz4x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-2js8d_openshift-marketplace(9332175b-3747-40d6-892d-1c126a05b0c2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 13:10:00 crc kubenswrapper[4786]: E0127 13:10:00.641207 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-2js8d" podUID="9332175b-3747-40d6-892d-1c126a05b0c2" Jan 27 13:10:01 crc 
kubenswrapper[4786]: E0127 13:10:01.659173 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2js8d" podUID="9332175b-3747-40d6-892d-1c126a05b0c2" Jan 27 13:10:01 crc kubenswrapper[4786]: E0127 13:10:01.715449 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 27 13:10:01 crc kubenswrapper[4786]: E0127 13:10:01.715665 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2b9gh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-cvqtc_openshift-marketplace(90f2a321-e151-44e9-a16b-4c9e7b883b64): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 13:10:01 crc kubenswrapper[4786]: E0127 13:10:01.717800 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-cvqtc" podUID="90f2a321-e151-44e9-a16b-4c9e7b883b64" Jan 27 13:10:02 crc 
kubenswrapper[4786]: E0127 13:10:02.879859 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-cvqtc" podUID="90f2a321-e151-44e9-a16b-4c9e7b883b64" Jan 27 13:10:02 crc kubenswrapper[4786]: E0127 13:10:02.965196 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 27 13:10:02 crc kubenswrapper[4786]: E0127 13:10:02.965365 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-svqb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-z4pv9_openshift-marketplace(5c25e7dc-42ad-4c09-8187-354fd9f6d954): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 13:10:02 crc kubenswrapper[4786]: E0127 13:10:02.966548 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-z4pv9" podUID="5c25e7dc-42ad-4c09-8187-354fd9f6d954" Jan 27 13:10:02 crc 
kubenswrapper[4786]: E0127 13:10:02.989763 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 27 13:10:02 crc kubenswrapper[4786]: E0127 13:10:02.989953 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pc465,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-9lbn2_openshift-marketplace(dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 13:10:02 crc kubenswrapper[4786]: E0127 13:10:02.991165 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-9lbn2" podUID="dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a" Jan 27 13:10:04 crc kubenswrapper[4786]: I0127 13:10:04.756238 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 13:10:05 crc kubenswrapper[4786]: E0127 13:10:05.949343 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-9lbn2" podUID="dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a" Jan 27 13:10:05 crc kubenswrapper[4786]: E0127 13:10:05.949343 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z4pv9" podUID="5c25e7dc-42ad-4c09-8187-354fd9f6d954" Jan 27 13:10:06 crc kubenswrapper[4786]: E0127 13:10:06.044367 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 27 13:10:06 crc kubenswrapper[4786]: E0127 13:10:06.044978 4786 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5b6xt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-ccwr8_openshift-marketplace(acf3913c-c965-4da8-91b3-6d4f2fa40f6b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 13:10:06 crc kubenswrapper[4786]: E0127 13:10:06.046371 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-ccwr8" podUID="acf3913c-c965-4da8-91b3-6d4f2fa40f6b" Jan 27 13:10:06 crc kubenswrapper[4786]: E0127 13:10:06.064043 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 27 13:10:06 crc kubenswrapper[4786]: E0127 13:10:06.064230 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9tq9s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppA
rmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-sk9mj_openshift-marketplace(f61b3f86-bdc9-44a2-a5a8-d9895393ddaa): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 13:10:06 crc kubenswrapper[4786]: E0127 13:10:06.065621 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-sk9mj" podUID="f61b3f86-bdc9-44a2-a5a8-d9895393ddaa" Jan 27 13:10:06 crc kubenswrapper[4786]: E0127 13:10:06.081903 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 27 13:10:06 crc kubenswrapper[4786]: E0127 13:10:06.082065 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qrcx9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-f44tb_openshift-marketplace(35ceca1f-028f-4e16-8c4c-1fe9094598c8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 13:10:06 crc kubenswrapper[4786]: E0127 13:10:06.083386 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-f44tb" podUID="35ceca1f-028f-4e16-8c4c-1fe9094598c8" Jan 27 13:10:06 crc 
kubenswrapper[4786]: I0127 13:10:06.221588 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-57tgc" event={"ID":"29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0","Type":"ContainerStarted","Data":"e6f7116a0892304c97f5e18c6cb7bb84c09299797c127fc865184c6aee2874cc"} Jan 27 13:10:06 crc kubenswrapper[4786]: E0127 13:10:06.223444 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-f44tb" podUID="35ceca1f-028f-4e16-8c4c-1fe9094598c8" Jan 27 13:10:06 crc kubenswrapper[4786]: E0127 13:10:06.223971 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-sk9mj" podUID="f61b3f86-bdc9-44a2-a5a8-d9895393ddaa" Jan 27 13:10:06 crc kubenswrapper[4786]: E0127 13:10:06.225064 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-ccwr8" podUID="acf3913c-c965-4da8-91b3-6d4f2fa40f6b" Jan 27 13:10:06 crc kubenswrapper[4786]: I0127 13:10:06.373191 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-8jf77"] Jan 27 13:10:06 crc kubenswrapper[4786]: I0127 13:10:06.402021 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 13:10:06 crc kubenswrapper[4786]: E0127 13:10:06.402308 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e37df03-3728-4e81-8cdf-55cfe1e59391" 
containerName="pruner" Jan 27 13:10:06 crc kubenswrapper[4786]: I0127 13:10:06.402327 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e37df03-3728-4e81-8cdf-55cfe1e59391" containerName="pruner" Jan 27 13:10:06 crc kubenswrapper[4786]: E0127 13:10:06.402342 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="593c7cfb-1e0d-403e-aabe-4e39fb0bba88" containerName="pruner" Jan 27 13:10:06 crc kubenswrapper[4786]: I0127 13:10:06.402351 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="593c7cfb-1e0d-403e-aabe-4e39fb0bba88" containerName="pruner" Jan 27 13:10:06 crc kubenswrapper[4786]: I0127 13:10:06.402477 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="593c7cfb-1e0d-403e-aabe-4e39fb0bba88" containerName="pruner" Jan 27 13:10:06 crc kubenswrapper[4786]: I0127 13:10:06.402493 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e37df03-3728-4e81-8cdf-55cfe1e59391" containerName="pruner" Jan 27 13:10:06 crc kubenswrapper[4786]: I0127 13:10:06.402988 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 13:10:06 crc kubenswrapper[4786]: I0127 13:10:06.405274 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 13:10:06 crc kubenswrapper[4786]: I0127 13:10:06.405939 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 13:10:06 crc kubenswrapper[4786]: I0127 13:10:06.413564 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 13:10:06 crc kubenswrapper[4786]: I0127 13:10:06.509043 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4dfa22da-3b7a-43f8-9b21-e40032055925-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4dfa22da-3b7a-43f8-9b21-e40032055925\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 13:10:06 crc kubenswrapper[4786]: I0127 13:10:06.509349 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4dfa22da-3b7a-43f8-9b21-e40032055925-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4dfa22da-3b7a-43f8-9b21-e40032055925\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 13:10:06 crc kubenswrapper[4786]: I0127 13:10:06.611064 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4dfa22da-3b7a-43f8-9b21-e40032055925-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4dfa22da-3b7a-43f8-9b21-e40032055925\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 13:10:06 crc kubenswrapper[4786]: I0127 13:10:06.611115 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/4dfa22da-3b7a-43f8-9b21-e40032055925-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4dfa22da-3b7a-43f8-9b21-e40032055925\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 13:10:06 crc kubenswrapper[4786]: I0127 13:10:06.611190 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4dfa22da-3b7a-43f8-9b21-e40032055925-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4dfa22da-3b7a-43f8-9b21-e40032055925\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 13:10:06 crc kubenswrapper[4786]: I0127 13:10:06.631959 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4dfa22da-3b7a-43f8-9b21-e40032055925-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4dfa22da-3b7a-43f8-9b21-e40032055925\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 13:10:06 crc kubenswrapper[4786]: I0127 13:10:06.786624 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 13:10:07 crc kubenswrapper[4786]: I0127 13:10:07.161738 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 13:10:07 crc kubenswrapper[4786]: I0127 13:10:07.227860 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4dfa22da-3b7a-43f8-9b21-e40032055925","Type":"ContainerStarted","Data":"7d885fc6daffb3061975396c2392631cb90b20ecdbb888046ddae5d14ef9ba34"} Jan 27 13:10:07 crc kubenswrapper[4786]: I0127 13:10:07.229701 4786 generic.go:334] "Generic (PLEG): container finished" podID="29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0" containerID="e6f7116a0892304c97f5e18c6cb7bb84c09299797c127fc865184c6aee2874cc" exitCode=0 Jan 27 13:10:07 crc kubenswrapper[4786]: I0127 13:10:07.229751 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-57tgc" event={"ID":"29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0","Type":"ContainerDied","Data":"e6f7116a0892304c97f5e18c6cb7bb84c09299797c127fc865184c6aee2874cc"} Jan 27 13:10:07 crc kubenswrapper[4786]: I0127 13:10:07.232916 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-8jf77" event={"ID":"a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496","Type":"ContainerStarted","Data":"5d2229a59400526cbc60225f410b94b3b2daafbce35966453a498b0e4effd48a"} Jan 27 13:10:07 crc kubenswrapper[4786]: I0127 13:10:07.232944 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-8jf77" event={"ID":"a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496","Type":"ContainerStarted","Data":"ea8be94ff26c317573ae2eb61400c0281168d28343300d7c02ae89d1c2ba4b59"} Jan 27 13:10:07 crc kubenswrapper[4786]: I0127 13:10:07.232955 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-8jf77" 
event={"ID":"a7bfd242-ad3e-47a1-9e0b-a5e2cfd82496","Type":"ContainerStarted","Data":"0c8453cff3fad093fe34e10229a1cf61fe0013463c358ca21285914e07de2cfa"} Jan 27 13:10:07 crc kubenswrapper[4786]: I0127 13:10:07.260006 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-8jf77" podStartSLOduration=169.25998984 podStartE2EDuration="2m49.25998984s" podCreationTimestamp="2026-01-27 13:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:10:07.25935438 +0000 UTC m=+190.469968519" watchObservedRunningTime="2026-01-27 13:10:07.25998984 +0000 UTC m=+190.470603959" Jan 27 13:10:08 crc kubenswrapper[4786]: I0127 13:10:08.239725 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-57tgc" event={"ID":"29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0","Type":"ContainerStarted","Data":"ed1104acdc3c4e2a205d16dd17b14101b236b609ffb66d1ef78b66d5dd919249"} Jan 27 13:10:08 crc kubenswrapper[4786]: I0127 13:10:08.241264 4786 generic.go:334] "Generic (PLEG): container finished" podID="4dfa22da-3b7a-43f8-9b21-e40032055925" containerID="a9b2d9f4527cea6ce425eb92e4eac70c66deb7f239524abc22a13be7b07fc630" exitCode=0 Jan 27 13:10:08 crc kubenswrapper[4786]: I0127 13:10:08.241387 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4dfa22da-3b7a-43f8-9b21-e40032055925","Type":"ContainerDied","Data":"a9b2d9f4527cea6ce425eb92e4eac70c66deb7f239524abc22a13be7b07fc630"} Jan 27 13:10:08 crc kubenswrapper[4786]: I0127 13:10:08.257679 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-57tgc" podStartSLOduration=2.113547486 podStartE2EDuration="42.257666911s" podCreationTimestamp="2026-01-27 13:09:26 +0000 UTC" firstStartedPulling="2026-01-27 13:09:27.613956568 +0000 UTC 
m=+150.824570687" lastFinishedPulling="2026-01-27 13:10:07.758075993 +0000 UTC m=+190.968690112" observedRunningTime="2026-01-27 13:10:08.255452444 +0000 UTC m=+191.466066563" watchObservedRunningTime="2026-01-27 13:10:08.257666911 +0000 UTC m=+191.468281030" Jan 27 13:10:09 crc kubenswrapper[4786]: I0127 13:10:09.533439 4786 patch_prober.go:28] interesting pod/machine-config-daemon-7bxtk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:10:09 crc kubenswrapper[4786]: I0127 13:10:09.533803 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:10:09 crc kubenswrapper[4786]: I0127 13:10:09.560138 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 13:10:09 crc kubenswrapper[4786]: I0127 13:10:09.756571 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4dfa22da-3b7a-43f8-9b21-e40032055925-kubelet-dir\") pod \"4dfa22da-3b7a-43f8-9b21-e40032055925\" (UID: \"4dfa22da-3b7a-43f8-9b21-e40032055925\") " Jan 27 13:10:09 crc kubenswrapper[4786]: I0127 13:10:09.756727 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4dfa22da-3b7a-43f8-9b21-e40032055925-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4dfa22da-3b7a-43f8-9b21-e40032055925" (UID: "4dfa22da-3b7a-43f8-9b21-e40032055925"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:10:09 crc kubenswrapper[4786]: I0127 13:10:09.756779 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4dfa22da-3b7a-43f8-9b21-e40032055925-kube-api-access\") pod \"4dfa22da-3b7a-43f8-9b21-e40032055925\" (UID: \"4dfa22da-3b7a-43f8-9b21-e40032055925\") " Jan 27 13:10:09 crc kubenswrapper[4786]: I0127 13:10:09.757166 4786 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4dfa22da-3b7a-43f8-9b21-e40032055925-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 13:10:09 crc kubenswrapper[4786]: I0127 13:10:09.763443 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dfa22da-3b7a-43f8-9b21-e40032055925-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4dfa22da-3b7a-43f8-9b21-e40032055925" (UID: "4dfa22da-3b7a-43f8-9b21-e40032055925"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:10:09 crc kubenswrapper[4786]: I0127 13:10:09.859267 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4dfa22da-3b7a-43f8-9b21-e40032055925-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 13:10:10 crc kubenswrapper[4786]: I0127 13:10:10.255711 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4dfa22da-3b7a-43f8-9b21-e40032055925","Type":"ContainerDied","Data":"7d885fc6daffb3061975396c2392631cb90b20ecdbb888046ddae5d14ef9ba34"} Jan 27 13:10:10 crc kubenswrapper[4786]: I0127 13:10:10.255763 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d885fc6daffb3061975396c2392631cb90b20ecdbb888046ddae5d14ef9ba34" Jan 27 13:10:10 crc kubenswrapper[4786]: I0127 13:10:10.255763 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 13:10:12 crc kubenswrapper[4786]: I0127 13:10:12.799622 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 13:10:12 crc kubenswrapper[4786]: E0127 13:10:12.799883 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dfa22da-3b7a-43f8-9b21-e40032055925" containerName="pruner" Jan 27 13:10:12 crc kubenswrapper[4786]: I0127 13:10:12.799894 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dfa22da-3b7a-43f8-9b21-e40032055925" containerName="pruner" Jan 27 13:10:12 crc kubenswrapper[4786]: I0127 13:10:12.799993 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dfa22da-3b7a-43f8-9b21-e40032055925" containerName="pruner" Jan 27 13:10:12 crc kubenswrapper[4786]: I0127 13:10:12.800382 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 13:10:12 crc kubenswrapper[4786]: I0127 13:10:12.803928 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 13:10:12 crc kubenswrapper[4786]: I0127 13:10:12.803994 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 13:10:12 crc kubenswrapper[4786]: I0127 13:10:12.813897 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 13:10:12 crc kubenswrapper[4786]: I0127 13:10:12.899966 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6716fde7-7c58-4306-99f9-67baca4a9238-kubelet-dir\") pod \"installer-9-crc\" (UID: \"6716fde7-7c58-4306-99f9-67baca4a9238\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 13:10:12 crc kubenswrapper[4786]: I0127 13:10:12.900042 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6716fde7-7c58-4306-99f9-67baca4a9238-kube-api-access\") pod \"installer-9-crc\" (UID: \"6716fde7-7c58-4306-99f9-67baca4a9238\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 13:10:12 crc kubenswrapper[4786]: I0127 13:10:12.900185 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6716fde7-7c58-4306-99f9-67baca4a9238-var-lock\") pod \"installer-9-crc\" (UID: \"6716fde7-7c58-4306-99f9-67baca4a9238\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 13:10:13 crc kubenswrapper[4786]: I0127 13:10:13.001421 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/6716fde7-7c58-4306-99f9-67baca4a9238-var-lock\") pod \"installer-9-crc\" (UID: \"6716fde7-7c58-4306-99f9-67baca4a9238\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 13:10:13 crc kubenswrapper[4786]: I0127 13:10:13.001472 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6716fde7-7c58-4306-99f9-67baca4a9238-kubelet-dir\") pod \"installer-9-crc\" (UID: \"6716fde7-7c58-4306-99f9-67baca4a9238\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 13:10:13 crc kubenswrapper[4786]: I0127 13:10:13.001499 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6716fde7-7c58-4306-99f9-67baca4a9238-kube-api-access\") pod \"installer-9-crc\" (UID: \"6716fde7-7c58-4306-99f9-67baca4a9238\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 13:10:13 crc kubenswrapper[4786]: I0127 13:10:13.001569 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6716fde7-7c58-4306-99f9-67baca4a9238-var-lock\") pod \"installer-9-crc\" (UID: \"6716fde7-7c58-4306-99f9-67baca4a9238\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 13:10:13 crc kubenswrapper[4786]: I0127 13:10:13.001587 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6716fde7-7c58-4306-99f9-67baca4a9238-kubelet-dir\") pod \"installer-9-crc\" (UID: \"6716fde7-7c58-4306-99f9-67baca4a9238\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 13:10:13 crc kubenswrapper[4786]: I0127 13:10:13.019268 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6716fde7-7c58-4306-99f9-67baca4a9238-kube-api-access\") pod \"installer-9-crc\" (UID: \"6716fde7-7c58-4306-99f9-67baca4a9238\") " 
pod="openshift-kube-apiserver/installer-9-crc" Jan 27 13:10:13 crc kubenswrapper[4786]: I0127 13:10:13.123706 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 13:10:13 crc kubenswrapper[4786]: I0127 13:10:13.499429 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 13:10:13 crc kubenswrapper[4786]: W0127 13:10:13.505782 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod6716fde7_7c58_4306_99f9_67baca4a9238.slice/crio-ee5bf87a7955ac18f88930af255ffcc0b271dcb8e10a1a1ec4c61a4410ea3979 WatchSource:0}: Error finding container ee5bf87a7955ac18f88930af255ffcc0b271dcb8e10a1a1ec4c61a4410ea3979: Status 404 returned error can't find the container with id ee5bf87a7955ac18f88930af255ffcc0b271dcb8e10a1a1ec4c61a4410ea3979 Jan 27 13:10:14 crc kubenswrapper[4786]: I0127 13:10:14.284370 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"6716fde7-7c58-4306-99f9-67baca4a9238","Type":"ContainerStarted","Data":"b03d0f42cc35385326109b6cdbcc133f1c3b29562f43505219cacde99d65f891"} Jan 27 13:10:14 crc kubenswrapper[4786]: I0127 13:10:14.284740 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"6716fde7-7c58-4306-99f9-67baca4a9238","Type":"ContainerStarted","Data":"ee5bf87a7955ac18f88930af255ffcc0b271dcb8e10a1a1ec4c61a4410ea3979"} Jan 27 13:10:14 crc kubenswrapper[4786]: I0127 13:10:14.298475 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.29845504 podStartE2EDuration="2.29845504s" podCreationTimestamp="2026-01-27 13:10:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:10:14.295227143 +0000 
UTC m=+197.505841262" watchObservedRunningTime="2026-01-27 13:10:14.29845504 +0000 UTC m=+197.509069149" Jan 27 13:10:16 crc kubenswrapper[4786]: I0127 13:10:16.300843 4786 generic.go:334] "Generic (PLEG): container finished" podID="9332175b-3747-40d6-892d-1c126a05b0c2" containerID="fadecf1a5188f682562d22905957a6c92bbd8ae7f9b01574da3878481103870d" exitCode=0 Jan 27 13:10:16 crc kubenswrapper[4786]: I0127 13:10:16.300916 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2js8d" event={"ID":"9332175b-3747-40d6-892d-1c126a05b0c2","Type":"ContainerDied","Data":"fadecf1a5188f682562d22905957a6c92bbd8ae7f9b01574da3878481103870d"} Jan 27 13:10:16 crc kubenswrapper[4786]: I0127 13:10:16.415264 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-57tgc" Jan 27 13:10:16 crc kubenswrapper[4786]: I0127 13:10:16.415329 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-57tgc" Jan 27 13:10:16 crc kubenswrapper[4786]: I0127 13:10:16.658245 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-57tgc" Jan 27 13:10:17 crc kubenswrapper[4786]: I0127 13:10:17.308831 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2js8d" event={"ID":"9332175b-3747-40d6-892d-1c126a05b0c2","Type":"ContainerStarted","Data":"7fd1bc78e82ecbc935e08459b986530b5a3b8b93bc5557e0bc9b754a688602f6"} Jan 27 13:10:17 crc kubenswrapper[4786]: I0127 13:10:17.331903 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2js8d" podStartSLOduration=3.17058884 podStartE2EDuration="52.331885124s" podCreationTimestamp="2026-01-27 13:09:25 +0000 UTC" firstStartedPulling="2026-01-27 13:09:27.625773842 +0000 UTC m=+150.836387961" lastFinishedPulling="2026-01-27 
13:10:16.787070126 +0000 UTC m=+199.997684245" observedRunningTime="2026-01-27 13:10:17.328131552 +0000 UTC m=+200.538745691" watchObservedRunningTime="2026-01-27 13:10:17.331885124 +0000 UTC m=+200.542499243" Jan 27 13:10:17 crc kubenswrapper[4786]: I0127 13:10:17.349108 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-57tgc" Jan 27 13:10:18 crc kubenswrapper[4786]: I0127 13:10:18.315971 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sk9mj" event={"ID":"f61b3f86-bdc9-44a2-a5a8-d9895393ddaa","Type":"ContainerStarted","Data":"e5ebd0e5ac5b052dd5b59ff66941652aaa109092a71debbf25c6ed38d12e1a6d"} Jan 27 13:10:18 crc kubenswrapper[4786]: I0127 13:10:18.318198 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cvqtc" event={"ID":"90f2a321-e151-44e9-a16b-4c9e7b883b64","Type":"ContainerStarted","Data":"4f257256ae04a7797c149d09c4805d09b4119bf5f1eb16bf16bd2569b3820bc6"} Jan 27 13:10:19 crc kubenswrapper[4786]: I0127 13:10:19.326730 4786 generic.go:334] "Generic (PLEG): container finished" podID="f61b3f86-bdc9-44a2-a5a8-d9895393ddaa" containerID="e5ebd0e5ac5b052dd5b59ff66941652aaa109092a71debbf25c6ed38d12e1a6d" exitCode=0 Jan 27 13:10:19 crc kubenswrapper[4786]: I0127 13:10:19.326812 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sk9mj" event={"ID":"f61b3f86-bdc9-44a2-a5a8-d9895393ddaa","Type":"ContainerDied","Data":"e5ebd0e5ac5b052dd5b59ff66941652aaa109092a71debbf25c6ed38d12e1a6d"} Jan 27 13:10:19 crc kubenswrapper[4786]: I0127 13:10:19.329226 4786 generic.go:334] "Generic (PLEG): container finished" podID="90f2a321-e151-44e9-a16b-4c9e7b883b64" containerID="4f257256ae04a7797c149d09c4805d09b4119bf5f1eb16bf16bd2569b3820bc6" exitCode=0 Jan 27 13:10:19 crc kubenswrapper[4786]: I0127 13:10:19.329273 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-cvqtc" event={"ID":"90f2a321-e151-44e9-a16b-4c9e7b883b64","Type":"ContainerDied","Data":"4f257256ae04a7797c149d09c4805d09b4119bf5f1eb16bf16bd2569b3820bc6"} Jan 27 13:10:19 crc kubenswrapper[4786]: I0127 13:10:19.527660 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-57tgc"] Jan 27 13:10:19 crc kubenswrapper[4786]: I0127 13:10:19.527917 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-57tgc" podUID="29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0" containerName="registry-server" containerID="cri-o://ed1104acdc3c4e2a205d16dd17b14101b236b609ffb66d1ef78b66d5dd919249" gracePeriod=2 Jan 27 13:10:19 crc kubenswrapper[4786]: I0127 13:10:19.963526 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-57tgc" Jan 27 13:10:20 crc kubenswrapper[4786]: I0127 13:10:20.112502 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0-catalog-content\") pod \"29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0\" (UID: \"29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0\") " Jan 27 13:10:20 crc kubenswrapper[4786]: I0127 13:10:20.112909 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dxcs\" (UniqueName: \"kubernetes.io/projected/29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0-kube-api-access-8dxcs\") pod \"29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0\" (UID: \"29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0\") " Jan 27 13:10:20 crc kubenswrapper[4786]: I0127 13:10:20.112970 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0-utilities\") pod \"29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0\" (UID: 
\"29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0\") " Jan 27 13:10:20 crc kubenswrapper[4786]: I0127 13:10:20.113571 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0-utilities" (OuterVolumeSpecName: "utilities") pod "29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0" (UID: "29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:10:20 crc kubenswrapper[4786]: I0127 13:10:20.121911 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0-kube-api-access-8dxcs" (OuterVolumeSpecName: "kube-api-access-8dxcs") pod "29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0" (UID: "29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0"). InnerVolumeSpecName "kube-api-access-8dxcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:10:20 crc kubenswrapper[4786]: I0127 13:10:20.136056 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0" (UID: "29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:10:20 crc kubenswrapper[4786]: I0127 13:10:20.215065 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 13:10:20 crc kubenswrapper[4786]: I0127 13:10:20.215130 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dxcs\" (UniqueName: \"kubernetes.io/projected/29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0-kube-api-access-8dxcs\") on node \"crc\" DevicePath \"\"" Jan 27 13:10:20 crc kubenswrapper[4786]: I0127 13:10:20.215148 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 13:10:20 crc kubenswrapper[4786]: I0127 13:10:20.337132 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sk9mj" event={"ID":"f61b3f86-bdc9-44a2-a5a8-d9895393ddaa","Type":"ContainerStarted","Data":"eebd3bb3424aaf829248a618dbcb886dd9549cb2abbc375300bfefa682267c6c"} Jan 27 13:10:20 crc kubenswrapper[4786]: I0127 13:10:20.340141 4786 generic.go:334] "Generic (PLEG): container finished" podID="29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0" containerID="ed1104acdc3c4e2a205d16dd17b14101b236b609ffb66d1ef78b66d5dd919249" exitCode=0 Jan 27 13:10:20 crc kubenswrapper[4786]: I0127 13:10:20.340190 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-57tgc" event={"ID":"29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0","Type":"ContainerDied","Data":"ed1104acdc3c4e2a205d16dd17b14101b236b609ffb66d1ef78b66d5dd919249"} Jan 27 13:10:20 crc kubenswrapper[4786]: I0127 13:10:20.340208 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-57tgc" 
event={"ID":"29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0","Type":"ContainerDied","Data":"99e1286ec251848f73824c5159681d5feb2465f651ffdfa4c8ae77d0a9513f5e"} Jan 27 13:10:20 crc kubenswrapper[4786]: I0127 13:10:20.340222 4786 scope.go:117] "RemoveContainer" containerID="ed1104acdc3c4e2a205d16dd17b14101b236b609ffb66d1ef78b66d5dd919249" Jan 27 13:10:20 crc kubenswrapper[4786]: I0127 13:10:20.340311 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-57tgc" Jan 27 13:10:20 crc kubenswrapper[4786]: I0127 13:10:20.347758 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cvqtc" event={"ID":"90f2a321-e151-44e9-a16b-4c9e7b883b64","Type":"ContainerStarted","Data":"3d360fd5bb0bf27c933c9e33dfc670542ad989d68c80f66ddae1151774b2aef6"} Jan 27 13:10:20 crc kubenswrapper[4786]: I0127 13:10:20.355546 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sk9mj" podStartSLOduration=4.080387459 podStartE2EDuration="57.355528755s" podCreationTimestamp="2026-01-27 13:09:23 +0000 UTC" firstStartedPulling="2026-01-27 13:09:26.548895333 +0000 UTC m=+149.759509452" lastFinishedPulling="2026-01-27 13:10:19.824036629 +0000 UTC m=+203.034650748" observedRunningTime="2026-01-27 13:10:20.351154076 +0000 UTC m=+203.561768225" watchObservedRunningTime="2026-01-27 13:10:20.355528755 +0000 UTC m=+203.566142874" Jan 27 13:10:20 crc kubenswrapper[4786]: I0127 13:10:20.360639 4786 scope.go:117] "RemoveContainer" containerID="e6f7116a0892304c97f5e18c6cb7bb84c09299797c127fc865184c6aee2874cc" Jan 27 13:10:20 crc kubenswrapper[4786]: I0127 13:10:20.380538 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cvqtc" podStartSLOduration=4.100649016 podStartE2EDuration="57.380518223s" podCreationTimestamp="2026-01-27 13:09:23 +0000 UTC" 
firstStartedPulling="2026-01-27 13:09:26.489510848 +0000 UTC m=+149.700124967" lastFinishedPulling="2026-01-27 13:10:19.769380055 +0000 UTC m=+202.979994174" observedRunningTime="2026-01-27 13:10:20.376833523 +0000 UTC m=+203.587447642" watchObservedRunningTime="2026-01-27 13:10:20.380518223 +0000 UTC m=+203.591132342" Jan 27 13:10:20 crc kubenswrapper[4786]: I0127 13:10:20.400307 4786 scope.go:117] "RemoveContainer" containerID="cb55c564a54399cde840731a7273b833678db36cf42e9cca3a4e22c67e5106f7" Jan 27 13:10:20 crc kubenswrapper[4786]: I0127 13:10:20.414641 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-57tgc"] Jan 27 13:10:20 crc kubenswrapper[4786]: I0127 13:10:20.425111 4786 scope.go:117] "RemoveContainer" containerID="ed1104acdc3c4e2a205d16dd17b14101b236b609ffb66d1ef78b66d5dd919249" Jan 27 13:10:20 crc kubenswrapper[4786]: I0127 13:10:20.425496 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-57tgc"] Jan 27 13:10:20 crc kubenswrapper[4786]: E0127 13:10:20.425908 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed1104acdc3c4e2a205d16dd17b14101b236b609ffb66d1ef78b66d5dd919249\": container with ID starting with ed1104acdc3c4e2a205d16dd17b14101b236b609ffb66d1ef78b66d5dd919249 not found: ID does not exist" containerID="ed1104acdc3c4e2a205d16dd17b14101b236b609ffb66d1ef78b66d5dd919249" Jan 27 13:10:20 crc kubenswrapper[4786]: I0127 13:10:20.425943 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed1104acdc3c4e2a205d16dd17b14101b236b609ffb66d1ef78b66d5dd919249"} err="failed to get container status \"ed1104acdc3c4e2a205d16dd17b14101b236b609ffb66d1ef78b66d5dd919249\": rpc error: code = NotFound desc = could not find container \"ed1104acdc3c4e2a205d16dd17b14101b236b609ffb66d1ef78b66d5dd919249\": container with ID starting with 
ed1104acdc3c4e2a205d16dd17b14101b236b609ffb66d1ef78b66d5dd919249 not found: ID does not exist" Jan 27 13:10:20 crc kubenswrapper[4786]: I0127 13:10:20.425985 4786 scope.go:117] "RemoveContainer" containerID="e6f7116a0892304c97f5e18c6cb7bb84c09299797c127fc865184c6aee2874cc" Jan 27 13:10:20 crc kubenswrapper[4786]: E0127 13:10:20.427451 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6f7116a0892304c97f5e18c6cb7bb84c09299797c127fc865184c6aee2874cc\": container with ID starting with e6f7116a0892304c97f5e18c6cb7bb84c09299797c127fc865184c6aee2874cc not found: ID does not exist" containerID="e6f7116a0892304c97f5e18c6cb7bb84c09299797c127fc865184c6aee2874cc" Jan 27 13:10:20 crc kubenswrapper[4786]: I0127 13:10:20.427476 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6f7116a0892304c97f5e18c6cb7bb84c09299797c127fc865184c6aee2874cc"} err="failed to get container status \"e6f7116a0892304c97f5e18c6cb7bb84c09299797c127fc865184c6aee2874cc\": rpc error: code = NotFound desc = could not find container \"e6f7116a0892304c97f5e18c6cb7bb84c09299797c127fc865184c6aee2874cc\": container with ID starting with e6f7116a0892304c97f5e18c6cb7bb84c09299797c127fc865184c6aee2874cc not found: ID does not exist" Jan 27 13:10:20 crc kubenswrapper[4786]: I0127 13:10:20.427490 4786 scope.go:117] "RemoveContainer" containerID="cb55c564a54399cde840731a7273b833678db36cf42e9cca3a4e22c67e5106f7" Jan 27 13:10:20 crc kubenswrapper[4786]: E0127 13:10:20.432729 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb55c564a54399cde840731a7273b833678db36cf42e9cca3a4e22c67e5106f7\": container with ID starting with cb55c564a54399cde840731a7273b833678db36cf42e9cca3a4e22c67e5106f7 not found: ID does not exist" containerID="cb55c564a54399cde840731a7273b833678db36cf42e9cca3a4e22c67e5106f7" Jan 27 13:10:20 crc 
kubenswrapper[4786]: I0127 13:10:20.432770 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb55c564a54399cde840731a7273b833678db36cf42e9cca3a4e22c67e5106f7"} err="failed to get container status \"cb55c564a54399cde840731a7273b833678db36cf42e9cca3a4e22c67e5106f7\": rpc error: code = NotFound desc = could not find container \"cb55c564a54399cde840731a7273b833678db36cf42e9cca3a4e22c67e5106f7\": container with ID starting with cb55c564a54399cde840731a7273b833678db36cf42e9cca3a4e22c67e5106f7 not found: ID does not exist" Jan 27 13:10:21 crc kubenswrapper[4786]: I0127 13:10:21.354139 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4pv9" event={"ID":"5c25e7dc-42ad-4c09-8187-354fd9f6d954","Type":"ContainerStarted","Data":"67d46a8243224ede482e0a761cad93e9e245955b37801d5ebc0445fe09658fdc"} Jan 27 13:10:21 crc kubenswrapper[4786]: I0127 13:10:21.355783 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9lbn2" event={"ID":"dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a","Type":"ContainerStarted","Data":"44b0930588b4c5f24efdef597b715b9111272f27d04936b8baebf07702619f6a"} Jan 27 13:10:21 crc kubenswrapper[4786]: I0127 13:10:21.357483 4786 generic.go:334] "Generic (PLEG): container finished" podID="acf3913c-c965-4da8-91b3-6d4f2fa40f6b" containerID="670e242c65d60996f1b19cc269a12fafbee9f1dafadf6e71cc5bb5afc2fadf20" exitCode=0 Jan 27 13:10:21 crc kubenswrapper[4786]: I0127 13:10:21.357520 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccwr8" event={"ID":"acf3913c-c965-4da8-91b3-6d4f2fa40f6b","Type":"ContainerDied","Data":"670e242c65d60996f1b19cc269a12fafbee9f1dafadf6e71cc5bb5afc2fadf20"} Jan 27 13:10:21 crc kubenswrapper[4786]: I0127 13:10:21.360934 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f44tb" 
event={"ID":"35ceca1f-028f-4e16-8c4c-1fe9094598c8","Type":"ContainerStarted","Data":"b25a995651f029347f819bea1a078477c610dc977243d7722d8dda6127b47e6e"} Jan 27 13:10:21 crc kubenswrapper[4786]: I0127 13:10:21.472293 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0" path="/var/lib/kubelet/pods/29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0/volumes" Jan 27 13:10:22 crc kubenswrapper[4786]: I0127 13:10:22.367747 4786 generic.go:334] "Generic (PLEG): container finished" podID="dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a" containerID="44b0930588b4c5f24efdef597b715b9111272f27d04936b8baebf07702619f6a" exitCode=0 Jan 27 13:10:22 crc kubenswrapper[4786]: I0127 13:10:22.367830 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9lbn2" event={"ID":"dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a","Type":"ContainerDied","Data":"44b0930588b4c5f24efdef597b715b9111272f27d04936b8baebf07702619f6a"} Jan 27 13:10:22 crc kubenswrapper[4786]: I0127 13:10:22.370679 4786 generic.go:334] "Generic (PLEG): container finished" podID="35ceca1f-028f-4e16-8c4c-1fe9094598c8" containerID="b25a995651f029347f819bea1a078477c610dc977243d7722d8dda6127b47e6e" exitCode=0 Jan 27 13:10:22 crc kubenswrapper[4786]: I0127 13:10:22.370765 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f44tb" event={"ID":"35ceca1f-028f-4e16-8c4c-1fe9094598c8","Type":"ContainerDied","Data":"b25a995651f029347f819bea1a078477c610dc977243d7722d8dda6127b47e6e"} Jan 27 13:10:22 crc kubenswrapper[4786]: I0127 13:10:22.373824 4786 generic.go:334] "Generic (PLEG): container finished" podID="5c25e7dc-42ad-4c09-8187-354fd9f6d954" containerID="67d46a8243224ede482e0a761cad93e9e245955b37801d5ebc0445fe09658fdc" exitCode=0 Jan 27 13:10:22 crc kubenswrapper[4786]: I0127 13:10:22.373847 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4pv9" 
event={"ID":"5c25e7dc-42ad-4c09-8187-354fd9f6d954","Type":"ContainerDied","Data":"67d46a8243224ede482e0a761cad93e9e245955b37801d5ebc0445fe09658fdc"} Jan 27 13:10:23 crc kubenswrapper[4786]: I0127 13:10:23.381052 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccwr8" event={"ID":"acf3913c-c965-4da8-91b3-6d4f2fa40f6b","Type":"ContainerStarted","Data":"ec60e8dde322ceba53dc32082a0395cd9f6dd288c52712804a61ec1edcd0eb33"} Jan 27 13:10:23 crc kubenswrapper[4786]: I0127 13:10:23.797411 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sk9mj" Jan 27 13:10:23 crc kubenswrapper[4786]: I0127 13:10:23.797528 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sk9mj" Jan 27 13:10:23 crc kubenswrapper[4786]: I0127 13:10:23.847993 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sk9mj" Jan 27 13:10:24 crc kubenswrapper[4786]: I0127 13:10:24.293429 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cvqtc" Jan 27 13:10:24 crc kubenswrapper[4786]: I0127 13:10:24.293663 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cvqtc" Jan 27 13:10:24 crc kubenswrapper[4786]: I0127 13:10:24.329928 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cvqtc" Jan 27 13:10:24 crc kubenswrapper[4786]: I0127 13:10:24.426385 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sk9mj" Jan 27 13:10:24 crc kubenswrapper[4786]: I0127 13:10:24.439085 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cvqtc" Jan 27 
13:10:24 crc kubenswrapper[4786]: I0127 13:10:24.448391 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ccwr8" podStartSLOduration=3.625124846 podStartE2EDuration="57.448370227s" podCreationTimestamp="2026-01-27 13:09:27 +0000 UTC" firstStartedPulling="2026-01-27 13:09:28.83107345 +0000 UTC m=+152.041687559" lastFinishedPulling="2026-01-27 13:10:22.654318821 +0000 UTC m=+205.864932940" observedRunningTime="2026-01-27 13:10:24.414378774 +0000 UTC m=+207.624992893" watchObservedRunningTime="2026-01-27 13:10:24.448370227 +0000 UTC m=+207.658984356" Jan 27 13:10:25 crc kubenswrapper[4786]: I0127 13:10:25.727182 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cvqtc"] Jan 27 13:10:26 crc kubenswrapper[4786]: I0127 13:10:26.069944 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2js8d" Jan 27 13:10:26 crc kubenswrapper[4786]: I0127 13:10:26.070275 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2js8d" Jan 27 13:10:26 crc kubenswrapper[4786]: I0127 13:10:26.117059 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2js8d" Jan 27 13:10:26 crc kubenswrapper[4786]: I0127 13:10:26.397774 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cvqtc" podUID="90f2a321-e151-44e9-a16b-4c9e7b883b64" containerName="registry-server" containerID="cri-o://3d360fd5bb0bf27c933c9e33dfc670542ad989d68c80f66ddae1151774b2aef6" gracePeriod=2 Jan 27 13:10:26 crc kubenswrapper[4786]: I0127 13:10:26.434359 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2js8d" Jan 27 13:10:27 crc kubenswrapper[4786]: I0127 13:10:27.417106 4786 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ccwr8" Jan 27 13:10:27 crc kubenswrapper[4786]: I0127 13:10:27.417184 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ccwr8" Jan 27 13:10:28 crc kubenswrapper[4786]: I0127 13:10:28.317680 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cvqtc" Jan 27 13:10:28 crc kubenswrapper[4786]: I0127 13:10:28.411652 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4pv9" event={"ID":"5c25e7dc-42ad-4c09-8187-354fd9f6d954","Type":"ContainerStarted","Data":"efdb11a53d07348d81f78a1bf9ea7f789c8956c3c3d914124e8ebb1385a7b2f0"} Jan 27 13:10:28 crc kubenswrapper[4786]: I0127 13:10:28.413782 4786 generic.go:334] "Generic (PLEG): container finished" podID="90f2a321-e151-44e9-a16b-4c9e7b883b64" containerID="3d360fd5bb0bf27c933c9e33dfc670542ad989d68c80f66ddae1151774b2aef6" exitCode=0 Jan 27 13:10:28 crc kubenswrapper[4786]: I0127 13:10:28.413821 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cvqtc" event={"ID":"90f2a321-e151-44e9-a16b-4c9e7b883b64","Type":"ContainerDied","Data":"3d360fd5bb0bf27c933c9e33dfc670542ad989d68c80f66ddae1151774b2aef6"} Jan 27 13:10:28 crc kubenswrapper[4786]: I0127 13:10:28.413846 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cvqtc" Jan 27 13:10:28 crc kubenswrapper[4786]: I0127 13:10:28.413855 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cvqtc" event={"ID":"90f2a321-e151-44e9-a16b-4c9e7b883b64","Type":"ContainerDied","Data":"1e9fd40d373accaf411287f25bcfa3b0311874c5b0a2e67a308da7dbffc48dde"} Jan 27 13:10:28 crc kubenswrapper[4786]: I0127 13:10:28.413877 4786 scope.go:117] "RemoveContainer" containerID="3d360fd5bb0bf27c933c9e33dfc670542ad989d68c80f66ddae1151774b2aef6" Jan 27 13:10:28 crc kubenswrapper[4786]: I0127 13:10:28.418416 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9lbn2" event={"ID":"dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a","Type":"ContainerStarted","Data":"ecf32e051bedb06636bd8c57fe31bbf5d3ec99728b0cef8b46980ce1fcd29403"} Jan 27 13:10:28 crc kubenswrapper[4786]: I0127 13:10:28.440242 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9lbn2" podStartSLOduration=5.330211233 podStartE2EDuration="1m5.440221308s" podCreationTimestamp="2026-01-27 13:09:23 +0000 UTC" firstStartedPulling="2026-01-27 13:09:26.501797025 +0000 UTC m=+149.712411144" lastFinishedPulling="2026-01-27 13:10:26.6118071 +0000 UTC m=+209.822421219" observedRunningTime="2026-01-27 13:10:28.439456878 +0000 UTC m=+211.650070997" watchObservedRunningTime="2026-01-27 13:10:28.440221308 +0000 UTC m=+211.650835427" Jan 27 13:10:28 crc kubenswrapper[4786]: I0127 13:10:28.458146 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ccwr8" podUID="acf3913c-c965-4da8-91b3-6d4f2fa40f6b" containerName="registry-server" probeResult="failure" output=< Jan 27 13:10:28 crc kubenswrapper[4786]: timeout: failed to connect service ":50051" within 1s Jan 27 13:10:28 crc kubenswrapper[4786]: > Jan 27 13:10:28 crc kubenswrapper[4786]: I0127 
13:10:28.469594 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2b9gh\" (UniqueName: \"kubernetes.io/projected/90f2a321-e151-44e9-a16b-4c9e7b883b64-kube-api-access-2b9gh\") pod \"90f2a321-e151-44e9-a16b-4c9e7b883b64\" (UID: \"90f2a321-e151-44e9-a16b-4c9e7b883b64\") " Jan 27 13:10:28 crc kubenswrapper[4786]: I0127 13:10:28.469796 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90f2a321-e151-44e9-a16b-4c9e7b883b64-utilities\") pod \"90f2a321-e151-44e9-a16b-4c9e7b883b64\" (UID: \"90f2a321-e151-44e9-a16b-4c9e7b883b64\") " Jan 27 13:10:28 crc kubenswrapper[4786]: I0127 13:10:28.469895 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90f2a321-e151-44e9-a16b-4c9e7b883b64-catalog-content\") pod \"90f2a321-e151-44e9-a16b-4c9e7b883b64\" (UID: \"90f2a321-e151-44e9-a16b-4c9e7b883b64\") " Jan 27 13:10:28 crc kubenswrapper[4786]: I0127 13:10:28.470693 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90f2a321-e151-44e9-a16b-4c9e7b883b64-utilities" (OuterVolumeSpecName: "utilities") pod "90f2a321-e151-44e9-a16b-4c9e7b883b64" (UID: "90f2a321-e151-44e9-a16b-4c9e7b883b64"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:10:28 crc kubenswrapper[4786]: I0127 13:10:28.476979 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90f2a321-e151-44e9-a16b-4c9e7b883b64-kube-api-access-2b9gh" (OuterVolumeSpecName: "kube-api-access-2b9gh") pod "90f2a321-e151-44e9-a16b-4c9e7b883b64" (UID: "90f2a321-e151-44e9-a16b-4c9e7b883b64"). InnerVolumeSpecName "kube-api-access-2b9gh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:10:28 crc kubenswrapper[4786]: I0127 13:10:28.528206 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90f2a321-e151-44e9-a16b-4c9e7b883b64-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "90f2a321-e151-44e9-a16b-4c9e7b883b64" (UID: "90f2a321-e151-44e9-a16b-4c9e7b883b64"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:10:28 crc kubenswrapper[4786]: I0127 13:10:28.572849 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2b9gh\" (UniqueName: \"kubernetes.io/projected/90f2a321-e151-44e9-a16b-4c9e7b883b64-kube-api-access-2b9gh\") on node \"crc\" DevicePath \"\"" Jan 27 13:10:28 crc kubenswrapper[4786]: I0127 13:10:28.572875 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90f2a321-e151-44e9-a16b-4c9e7b883b64-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 13:10:28 crc kubenswrapper[4786]: I0127 13:10:28.572886 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90f2a321-e151-44e9-a16b-4c9e7b883b64-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 13:10:28 crc kubenswrapper[4786]: I0127 13:10:28.736415 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cvqtc"] Jan 27 13:10:28 crc kubenswrapper[4786]: I0127 13:10:28.740176 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cvqtc"] Jan 27 13:10:29 crc kubenswrapper[4786]: I0127 13:10:29.433853 4786 scope.go:117] "RemoveContainer" containerID="4f257256ae04a7797c149d09c4805d09b4119bf5f1eb16bf16bd2569b3820bc6" Jan 27 13:10:29 crc kubenswrapper[4786]: I0127 13:10:29.444361 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-z4pv9" podStartSLOduration=4.300578593 podStartE2EDuration="1m5.444340214s" podCreationTimestamp="2026-01-27 13:09:24 +0000 UTC" firstStartedPulling="2026-01-27 13:09:26.593401744 +0000 UTC m=+149.804015863" lastFinishedPulling="2026-01-27 13:10:27.737163355 +0000 UTC m=+210.947777484" observedRunningTime="2026-01-27 13:10:29.441196049 +0000 UTC m=+212.651810178" watchObservedRunningTime="2026-01-27 13:10:29.444340214 +0000 UTC m=+212.654954333" Jan 27 13:10:29 crc kubenswrapper[4786]: I0127 13:10:29.471775 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90f2a321-e151-44e9-a16b-4c9e7b883b64" path="/var/lib/kubelet/pods/90f2a321-e151-44e9-a16b-4c9e7b883b64/volumes" Jan 27 13:10:31 crc kubenswrapper[4786]: I0127 13:10:31.489636 4786 scope.go:117] "RemoveContainer" containerID="5eb8622633cebe2da1fcb88a323fbb80c75557d3933239f10766c08ed58587b8" Jan 27 13:10:31 crc kubenswrapper[4786]: I0127 13:10:31.507776 4786 scope.go:117] "RemoveContainer" containerID="3d360fd5bb0bf27c933c9e33dfc670542ad989d68c80f66ddae1151774b2aef6" Jan 27 13:10:31 crc kubenswrapper[4786]: E0127 13:10:31.508851 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d360fd5bb0bf27c933c9e33dfc670542ad989d68c80f66ddae1151774b2aef6\": container with ID starting with 3d360fd5bb0bf27c933c9e33dfc670542ad989d68c80f66ddae1151774b2aef6 not found: ID does not exist" containerID="3d360fd5bb0bf27c933c9e33dfc670542ad989d68c80f66ddae1151774b2aef6" Jan 27 13:10:31 crc kubenswrapper[4786]: I0127 13:10:31.508914 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d360fd5bb0bf27c933c9e33dfc670542ad989d68c80f66ddae1151774b2aef6"} err="failed to get container status \"3d360fd5bb0bf27c933c9e33dfc670542ad989d68c80f66ddae1151774b2aef6\": rpc error: code = NotFound desc = could not find container 
\"3d360fd5bb0bf27c933c9e33dfc670542ad989d68c80f66ddae1151774b2aef6\": container with ID starting with 3d360fd5bb0bf27c933c9e33dfc670542ad989d68c80f66ddae1151774b2aef6 not found: ID does not exist" Jan 27 13:10:31 crc kubenswrapper[4786]: I0127 13:10:31.508954 4786 scope.go:117] "RemoveContainer" containerID="4f257256ae04a7797c149d09c4805d09b4119bf5f1eb16bf16bd2569b3820bc6" Jan 27 13:10:31 crc kubenswrapper[4786]: E0127 13:10:31.509358 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f257256ae04a7797c149d09c4805d09b4119bf5f1eb16bf16bd2569b3820bc6\": container with ID starting with 4f257256ae04a7797c149d09c4805d09b4119bf5f1eb16bf16bd2569b3820bc6 not found: ID does not exist" containerID="4f257256ae04a7797c149d09c4805d09b4119bf5f1eb16bf16bd2569b3820bc6" Jan 27 13:10:31 crc kubenswrapper[4786]: I0127 13:10:31.509390 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f257256ae04a7797c149d09c4805d09b4119bf5f1eb16bf16bd2569b3820bc6"} err="failed to get container status \"4f257256ae04a7797c149d09c4805d09b4119bf5f1eb16bf16bd2569b3820bc6\": rpc error: code = NotFound desc = could not find container \"4f257256ae04a7797c149d09c4805d09b4119bf5f1eb16bf16bd2569b3820bc6\": container with ID starting with 4f257256ae04a7797c149d09c4805d09b4119bf5f1eb16bf16bd2569b3820bc6 not found: ID does not exist" Jan 27 13:10:31 crc kubenswrapper[4786]: I0127 13:10:31.509408 4786 scope.go:117] "RemoveContainer" containerID="5eb8622633cebe2da1fcb88a323fbb80c75557d3933239f10766c08ed58587b8" Jan 27 13:10:31 crc kubenswrapper[4786]: E0127 13:10:31.509794 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5eb8622633cebe2da1fcb88a323fbb80c75557d3933239f10766c08ed58587b8\": container with ID starting with 5eb8622633cebe2da1fcb88a323fbb80c75557d3933239f10766c08ed58587b8 not found: ID does not exist" 
containerID="5eb8622633cebe2da1fcb88a323fbb80c75557d3933239f10766c08ed58587b8" Jan 27 13:10:31 crc kubenswrapper[4786]: I0127 13:10:31.509834 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5eb8622633cebe2da1fcb88a323fbb80c75557d3933239f10766c08ed58587b8"} err="failed to get container status \"5eb8622633cebe2da1fcb88a323fbb80c75557d3933239f10766c08ed58587b8\": rpc error: code = NotFound desc = could not find container \"5eb8622633cebe2da1fcb88a323fbb80c75557d3933239f10766c08ed58587b8\": container with ID starting with 5eb8622633cebe2da1fcb88a323fbb80c75557d3933239f10766c08ed58587b8 not found: ID does not exist" Jan 27 13:10:34 crc kubenswrapper[4786]: I0127 13:10:34.036101 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9lbn2" Jan 27 13:10:34 crc kubenswrapper[4786]: I0127 13:10:34.036468 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9lbn2" Jan 27 13:10:34 crc kubenswrapper[4786]: I0127 13:10:34.083583 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9lbn2" Jan 27 13:10:34 crc kubenswrapper[4786]: I0127 13:10:34.402412 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z4pv9" Jan 27 13:10:34 crc kubenswrapper[4786]: I0127 13:10:34.402474 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z4pv9" Jan 27 13:10:34 crc kubenswrapper[4786]: I0127 13:10:34.509654 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9lbn2" Jan 27 13:10:35 crc kubenswrapper[4786]: I0127 13:10:35.447746 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-z4pv9" 
podUID="5c25e7dc-42ad-4c09-8187-354fd9f6d954" containerName="registry-server" probeResult="failure" output=< Jan 27 13:10:35 crc kubenswrapper[4786]: timeout: failed to connect service ":50051" within 1s Jan 27 13:10:35 crc kubenswrapper[4786]: > Jan 27 13:10:37 crc kubenswrapper[4786]: I0127 13:10:37.458383 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ccwr8" Jan 27 13:10:37 crc kubenswrapper[4786]: I0127 13:10:37.492805 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ccwr8" Jan 27 13:10:37 crc kubenswrapper[4786]: I0127 13:10:37.691105 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ccwr8"] Jan 27 13:10:38 crc kubenswrapper[4786]: I0127 13:10:38.485202 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ccwr8" podUID="acf3913c-c965-4da8-91b3-6d4f2fa40f6b" containerName="registry-server" containerID="cri-o://ec60e8dde322ceba53dc32082a0395cd9f6dd288c52712804a61ec1edcd0eb33" gracePeriod=2 Jan 27 13:10:39 crc kubenswrapper[4786]: I0127 13:10:39.533349 4786 patch_prober.go:28] interesting pod/machine-config-daemon-7bxtk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:10:39 crc kubenswrapper[4786]: I0127 13:10:39.533433 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:10:39 crc kubenswrapper[4786]: I0127 13:10:39.533497 4786 kubelet.go:2542] 
"SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" Jan 27 13:10:39 crc kubenswrapper[4786]: I0127 13:10:39.534128 4786 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8c39d828ab97f5bcb6d5e29df28ec844ede201872825065370a69b8ec3b035e7"} pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 13:10:39 crc kubenswrapper[4786]: I0127 13:10:39.534545 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" containerID="cri-o://8c39d828ab97f5bcb6d5e29df28ec844ede201872825065370a69b8ec3b035e7" gracePeriod=600 Jan 27 13:10:40 crc kubenswrapper[4786]: I0127 13:10:40.605044 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-nsc4d"] Jan 27 13:10:41 crc kubenswrapper[4786]: I0127 13:10:41.513440 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f44tb" event={"ID":"35ceca1f-028f-4e16-8c4c-1fe9094598c8","Type":"ContainerStarted","Data":"d6198f8824d4ddac19879c411442b0b65e682b3d3b470c70bc136a9e1a7b0a94"} Jan 27 13:10:41 crc kubenswrapper[4786]: I0127 13:10:41.521124 4786 generic.go:334] "Generic (PLEG): container finished" podID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerID="8c39d828ab97f5bcb6d5e29df28ec844ede201872825065370a69b8ec3b035e7" exitCode=0 Jan 27 13:10:41 crc kubenswrapper[4786]: I0127 13:10:41.521196 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" 
event={"ID":"2c6a2646-52f7-41be-8a81-3fed6eac75cc","Type":"ContainerDied","Data":"8c39d828ab97f5bcb6d5e29df28ec844ede201872825065370a69b8ec3b035e7"} Jan 27 13:10:41 crc kubenswrapper[4786]: I0127 13:10:41.531910 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ccwr8_acf3913c-c965-4da8-91b3-6d4f2fa40f6b/registry-server/0.log" Jan 27 13:10:41 crc kubenswrapper[4786]: I0127 13:10:41.533808 4786 generic.go:334] "Generic (PLEG): container finished" podID="acf3913c-c965-4da8-91b3-6d4f2fa40f6b" containerID="ec60e8dde322ceba53dc32082a0395cd9f6dd288c52712804a61ec1edcd0eb33" exitCode=137 Jan 27 13:10:41 crc kubenswrapper[4786]: I0127 13:10:41.533843 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccwr8" event={"ID":"acf3913c-c965-4da8-91b3-6d4f2fa40f6b","Type":"ContainerDied","Data":"ec60e8dde322ceba53dc32082a0395cd9f6dd288c52712804a61ec1edcd0eb33"} Jan 27 13:10:41 crc kubenswrapper[4786]: I0127 13:10:41.542771 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f44tb" podStartSLOduration=11.683671735 podStartE2EDuration="1m15.54275529s" podCreationTimestamp="2026-01-27 13:09:26 +0000 UTC" firstStartedPulling="2026-01-27 13:09:27.630663468 +0000 UTC m=+150.841277587" lastFinishedPulling="2026-01-27 13:10:31.489747013 +0000 UTC m=+214.700361142" observedRunningTime="2026-01-27 13:10:41.540908541 +0000 UTC m=+224.751522660" watchObservedRunningTime="2026-01-27 13:10:41.54275529 +0000 UTC m=+224.753369409" Jan 27 13:10:41 crc kubenswrapper[4786]: I0127 13:10:41.718408 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ccwr8_acf3913c-c965-4da8-91b3-6d4f2fa40f6b/registry-server/0.log" Jan 27 13:10:41 crc kubenswrapper[4786]: I0127 13:10:41.719271 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ccwr8" Jan 27 13:10:41 crc kubenswrapper[4786]: I0127 13:10:41.902007 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5b6xt\" (UniqueName: \"kubernetes.io/projected/acf3913c-c965-4da8-91b3-6d4f2fa40f6b-kube-api-access-5b6xt\") pod \"acf3913c-c965-4da8-91b3-6d4f2fa40f6b\" (UID: \"acf3913c-c965-4da8-91b3-6d4f2fa40f6b\") " Jan 27 13:10:41 crc kubenswrapper[4786]: I0127 13:10:41.902092 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acf3913c-c965-4da8-91b3-6d4f2fa40f6b-catalog-content\") pod \"acf3913c-c965-4da8-91b3-6d4f2fa40f6b\" (UID: \"acf3913c-c965-4da8-91b3-6d4f2fa40f6b\") " Jan 27 13:10:41 crc kubenswrapper[4786]: I0127 13:10:41.902219 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acf3913c-c965-4da8-91b3-6d4f2fa40f6b-utilities\") pod \"acf3913c-c965-4da8-91b3-6d4f2fa40f6b\" (UID: \"acf3913c-c965-4da8-91b3-6d4f2fa40f6b\") " Jan 27 13:10:41 crc kubenswrapper[4786]: I0127 13:10:41.903029 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acf3913c-c965-4da8-91b3-6d4f2fa40f6b-utilities" (OuterVolumeSpecName: "utilities") pod "acf3913c-c965-4da8-91b3-6d4f2fa40f6b" (UID: "acf3913c-c965-4da8-91b3-6d4f2fa40f6b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:10:41 crc kubenswrapper[4786]: I0127 13:10:41.907368 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acf3913c-c965-4da8-91b3-6d4f2fa40f6b-kube-api-access-5b6xt" (OuterVolumeSpecName: "kube-api-access-5b6xt") pod "acf3913c-c965-4da8-91b3-6d4f2fa40f6b" (UID: "acf3913c-c965-4da8-91b3-6d4f2fa40f6b"). InnerVolumeSpecName "kube-api-access-5b6xt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:10:42 crc kubenswrapper[4786]: I0127 13:10:42.003358 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5b6xt\" (UniqueName: \"kubernetes.io/projected/acf3913c-c965-4da8-91b3-6d4f2fa40f6b-kube-api-access-5b6xt\") on node \"crc\" DevicePath \"\"" Jan 27 13:10:42 crc kubenswrapper[4786]: I0127 13:10:42.003409 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acf3913c-c965-4da8-91b3-6d4f2fa40f6b-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 13:10:42 crc kubenswrapper[4786]: I0127 13:10:42.182873 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acf3913c-c965-4da8-91b3-6d4f2fa40f6b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "acf3913c-c965-4da8-91b3-6d4f2fa40f6b" (UID: "acf3913c-c965-4da8-91b3-6d4f2fa40f6b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:10:42 crc kubenswrapper[4786]: I0127 13:10:42.206293 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acf3913c-c965-4da8-91b3-6d4f2fa40f6b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 13:10:42 crc kubenswrapper[4786]: I0127 13:10:42.540110 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ccwr8_acf3913c-c965-4da8-91b3-6d4f2fa40f6b/registry-server/0.log" Jan 27 13:10:42 crc kubenswrapper[4786]: I0127 13:10:42.541085 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccwr8" event={"ID":"acf3913c-c965-4da8-91b3-6d4f2fa40f6b","Type":"ContainerDied","Data":"fa0187f534f7a1ed3337a60c3d3a74b93b7b62d951d656d1d1e9acccb87bbe42"} Jan 27 13:10:42 crc kubenswrapper[4786]: I0127 13:10:42.541130 4786 scope.go:117] "RemoveContainer" 
containerID="ec60e8dde322ceba53dc32082a0395cd9f6dd288c52712804a61ec1edcd0eb33" Jan 27 13:10:42 crc kubenswrapper[4786]: I0127 13:10:42.541187 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ccwr8" Jan 27 13:10:42 crc kubenswrapper[4786]: I0127 13:10:42.562995 4786 scope.go:117] "RemoveContainer" containerID="670e242c65d60996f1b19cc269a12fafbee9f1dafadf6e71cc5bb5afc2fadf20" Jan 27 13:10:42 crc kubenswrapper[4786]: I0127 13:10:42.568058 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ccwr8"] Jan 27 13:10:42 crc kubenswrapper[4786]: I0127 13:10:42.576400 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ccwr8"] Jan 27 13:10:42 crc kubenswrapper[4786]: I0127 13:10:42.598886 4786 scope.go:117] "RemoveContainer" containerID="99368af4aa647e5a971892be6f4ce7bb7458c82c864d55c4c54d3c0984c1bf79" Jan 27 13:10:43 crc kubenswrapper[4786]: I0127 13:10:43.472069 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acf3913c-c965-4da8-91b3-6d4f2fa40f6b" path="/var/lib/kubelet/pods/acf3913c-c965-4da8-91b3-6d4f2fa40f6b/volumes" Jan 27 13:10:44 crc kubenswrapper[4786]: I0127 13:10:44.435439 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z4pv9" Jan 27 13:10:44 crc kubenswrapper[4786]: I0127 13:10:44.476538 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z4pv9" Jan 27 13:10:44 crc kubenswrapper[4786]: I0127 13:10:44.553112 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" event={"ID":"2c6a2646-52f7-41be-8a81-3fed6eac75cc","Type":"ContainerStarted","Data":"68b3b3585ee3cd83b41e2ece5024314ee69b7da4279bac6a5facbdf2f311dbb0"} Jan 27 13:10:45 crc kubenswrapper[4786]: I0127 
13:10:45.286938 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z4pv9"] Jan 27 13:10:45 crc kubenswrapper[4786]: I0127 13:10:45.557436 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z4pv9" podUID="5c25e7dc-42ad-4c09-8187-354fd9f6d954" containerName="registry-server" containerID="cri-o://efdb11a53d07348d81f78a1bf9ea7f789c8956c3c3d914124e8ebb1385a7b2f0" gracePeriod=2 Jan 27 13:10:45 crc kubenswrapper[4786]: I0127 13:10:45.963437 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z4pv9" Jan 27 13:10:46 crc kubenswrapper[4786]: I0127 13:10:46.150846 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c25e7dc-42ad-4c09-8187-354fd9f6d954-catalog-content\") pod \"5c25e7dc-42ad-4c09-8187-354fd9f6d954\" (UID: \"5c25e7dc-42ad-4c09-8187-354fd9f6d954\") " Jan 27 13:10:46 crc kubenswrapper[4786]: I0127 13:10:46.151168 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svqb2\" (UniqueName: \"kubernetes.io/projected/5c25e7dc-42ad-4c09-8187-354fd9f6d954-kube-api-access-svqb2\") pod \"5c25e7dc-42ad-4c09-8187-354fd9f6d954\" (UID: \"5c25e7dc-42ad-4c09-8187-354fd9f6d954\") " Jan 27 13:10:46 crc kubenswrapper[4786]: I0127 13:10:46.151210 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c25e7dc-42ad-4c09-8187-354fd9f6d954-utilities\") pod \"5c25e7dc-42ad-4c09-8187-354fd9f6d954\" (UID: \"5c25e7dc-42ad-4c09-8187-354fd9f6d954\") " Jan 27 13:10:46 crc kubenswrapper[4786]: I0127 13:10:46.152091 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c25e7dc-42ad-4c09-8187-354fd9f6d954-utilities" (OuterVolumeSpecName: 
"utilities") pod "5c25e7dc-42ad-4c09-8187-354fd9f6d954" (UID: "5c25e7dc-42ad-4c09-8187-354fd9f6d954"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:10:46 crc kubenswrapper[4786]: I0127 13:10:46.156836 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c25e7dc-42ad-4c09-8187-354fd9f6d954-kube-api-access-svqb2" (OuterVolumeSpecName: "kube-api-access-svqb2") pod "5c25e7dc-42ad-4c09-8187-354fd9f6d954" (UID: "5c25e7dc-42ad-4c09-8187-354fd9f6d954"). InnerVolumeSpecName "kube-api-access-svqb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:10:46 crc kubenswrapper[4786]: I0127 13:10:46.199482 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c25e7dc-42ad-4c09-8187-354fd9f6d954-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5c25e7dc-42ad-4c09-8187-354fd9f6d954" (UID: "5c25e7dc-42ad-4c09-8187-354fd9f6d954"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:10:46 crc kubenswrapper[4786]: I0127 13:10:46.252303 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svqb2\" (UniqueName: \"kubernetes.io/projected/5c25e7dc-42ad-4c09-8187-354fd9f6d954-kube-api-access-svqb2\") on node \"crc\" DevicePath \"\"" Jan 27 13:10:46 crc kubenswrapper[4786]: I0127 13:10:46.252357 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c25e7dc-42ad-4c09-8187-354fd9f6d954-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 13:10:46 crc kubenswrapper[4786]: I0127 13:10:46.252376 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c25e7dc-42ad-4c09-8187-354fd9f6d954-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 13:10:46 crc kubenswrapper[4786]: I0127 13:10:46.567342 4786 generic.go:334] "Generic (PLEG): container finished" podID="5c25e7dc-42ad-4c09-8187-354fd9f6d954" containerID="efdb11a53d07348d81f78a1bf9ea7f789c8956c3c3d914124e8ebb1385a7b2f0" exitCode=0 Jan 27 13:10:46 crc kubenswrapper[4786]: I0127 13:10:46.567402 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z4pv9" Jan 27 13:10:46 crc kubenswrapper[4786]: I0127 13:10:46.567421 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4pv9" event={"ID":"5c25e7dc-42ad-4c09-8187-354fd9f6d954","Type":"ContainerDied","Data":"efdb11a53d07348d81f78a1bf9ea7f789c8956c3c3d914124e8ebb1385a7b2f0"} Jan 27 13:10:46 crc kubenswrapper[4786]: I0127 13:10:46.567506 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4pv9" event={"ID":"5c25e7dc-42ad-4c09-8187-354fd9f6d954","Type":"ContainerDied","Data":"d42d3a84d47378eb5227212eb88a930f3e4eefe0c66ebd91b5d41ecbada36f66"} Jan 27 13:10:46 crc kubenswrapper[4786]: I0127 13:10:46.567528 4786 scope.go:117] "RemoveContainer" containerID="efdb11a53d07348d81f78a1bf9ea7f789c8956c3c3d914124e8ebb1385a7b2f0" Jan 27 13:10:46 crc kubenswrapper[4786]: I0127 13:10:46.581314 4786 scope.go:117] "RemoveContainer" containerID="67d46a8243224ede482e0a761cad93e9e245955b37801d5ebc0445fe09658fdc" Jan 27 13:10:46 crc kubenswrapper[4786]: I0127 13:10:46.596902 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z4pv9"] Jan 27 13:10:46 crc kubenswrapper[4786]: I0127 13:10:46.605766 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z4pv9"] Jan 27 13:10:46 crc kubenswrapper[4786]: I0127 13:10:46.608911 4786 scope.go:117] "RemoveContainer" containerID="1bda6593ad146c6023d7d712e85bbbebb052d0c58d3e721678080531ef07d809" Jan 27 13:10:46 crc kubenswrapper[4786]: I0127 13:10:46.622250 4786 scope.go:117] "RemoveContainer" containerID="efdb11a53d07348d81f78a1bf9ea7f789c8956c3c3d914124e8ebb1385a7b2f0" Jan 27 13:10:46 crc kubenswrapper[4786]: E0127 13:10:46.622651 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"efdb11a53d07348d81f78a1bf9ea7f789c8956c3c3d914124e8ebb1385a7b2f0\": container with ID starting with efdb11a53d07348d81f78a1bf9ea7f789c8956c3c3d914124e8ebb1385a7b2f0 not found: ID does not exist" containerID="efdb11a53d07348d81f78a1bf9ea7f789c8956c3c3d914124e8ebb1385a7b2f0" Jan 27 13:10:46 crc kubenswrapper[4786]: I0127 13:10:46.622700 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efdb11a53d07348d81f78a1bf9ea7f789c8956c3c3d914124e8ebb1385a7b2f0"} err="failed to get container status \"efdb11a53d07348d81f78a1bf9ea7f789c8956c3c3d914124e8ebb1385a7b2f0\": rpc error: code = NotFound desc = could not find container \"efdb11a53d07348d81f78a1bf9ea7f789c8956c3c3d914124e8ebb1385a7b2f0\": container with ID starting with efdb11a53d07348d81f78a1bf9ea7f789c8956c3c3d914124e8ebb1385a7b2f0 not found: ID does not exist" Jan 27 13:10:46 crc kubenswrapper[4786]: I0127 13:10:46.622734 4786 scope.go:117] "RemoveContainer" containerID="67d46a8243224ede482e0a761cad93e9e245955b37801d5ebc0445fe09658fdc" Jan 27 13:10:46 crc kubenswrapper[4786]: E0127 13:10:46.622996 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67d46a8243224ede482e0a761cad93e9e245955b37801d5ebc0445fe09658fdc\": container with ID starting with 67d46a8243224ede482e0a761cad93e9e245955b37801d5ebc0445fe09658fdc not found: ID does not exist" containerID="67d46a8243224ede482e0a761cad93e9e245955b37801d5ebc0445fe09658fdc" Jan 27 13:10:46 crc kubenswrapper[4786]: I0127 13:10:46.623018 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67d46a8243224ede482e0a761cad93e9e245955b37801d5ebc0445fe09658fdc"} err="failed to get container status \"67d46a8243224ede482e0a761cad93e9e245955b37801d5ebc0445fe09658fdc\": rpc error: code = NotFound desc = could not find container \"67d46a8243224ede482e0a761cad93e9e245955b37801d5ebc0445fe09658fdc\": container with ID 
starting with 67d46a8243224ede482e0a761cad93e9e245955b37801d5ebc0445fe09658fdc not found: ID does not exist" Jan 27 13:10:46 crc kubenswrapper[4786]: I0127 13:10:46.623035 4786 scope.go:117] "RemoveContainer" containerID="1bda6593ad146c6023d7d712e85bbbebb052d0c58d3e721678080531ef07d809" Jan 27 13:10:46 crc kubenswrapper[4786]: E0127 13:10:46.623226 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bda6593ad146c6023d7d712e85bbbebb052d0c58d3e721678080531ef07d809\": container with ID starting with 1bda6593ad146c6023d7d712e85bbbebb052d0c58d3e721678080531ef07d809 not found: ID does not exist" containerID="1bda6593ad146c6023d7d712e85bbbebb052d0c58d3e721678080531ef07d809" Jan 27 13:10:46 crc kubenswrapper[4786]: I0127 13:10:46.623251 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bda6593ad146c6023d7d712e85bbbebb052d0c58d3e721678080531ef07d809"} err="failed to get container status \"1bda6593ad146c6023d7d712e85bbbebb052d0c58d3e721678080531ef07d809\": rpc error: code = NotFound desc = could not find container \"1bda6593ad146c6023d7d712e85bbbebb052d0c58d3e721678080531ef07d809\": container with ID starting with 1bda6593ad146c6023d7d712e85bbbebb052d0c58d3e721678080531ef07d809 not found: ID does not exist" Jan 27 13:10:46 crc kubenswrapper[4786]: I0127 13:10:46.987965 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f44tb" Jan 27 13:10:46 crc kubenswrapper[4786]: I0127 13:10:46.988005 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f44tb" Jan 27 13:10:47 crc kubenswrapper[4786]: I0127 13:10:47.042953 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f44tb" Jan 27 13:10:47 crc kubenswrapper[4786]: I0127 13:10:47.477238 4786 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="5c25e7dc-42ad-4c09-8187-354fd9f6d954" path="/var/lib/kubelet/pods/5c25e7dc-42ad-4c09-8187-354fd9f6d954/volumes" Jan 27 13:10:47 crc kubenswrapper[4786]: I0127 13:10:47.639643 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f44tb" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.652398 4786 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 13:10:51 crc kubenswrapper[4786]: E0127 13:10:51.652891 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0" containerName="extract-utilities" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.652903 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0" containerName="extract-utilities" Jan 27 13:10:51 crc kubenswrapper[4786]: E0127 13:10:51.652912 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acf3913c-c965-4da8-91b3-6d4f2fa40f6b" containerName="extract-content" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.652919 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="acf3913c-c965-4da8-91b3-6d4f2fa40f6b" containerName="extract-content" Jan 27 13:10:51 crc kubenswrapper[4786]: E0127 13:10:51.652931 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90f2a321-e151-44e9-a16b-4c9e7b883b64" containerName="extract-utilities" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.652955 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="90f2a321-e151-44e9-a16b-4c9e7b883b64" containerName="extract-utilities" Jan 27 13:10:51 crc kubenswrapper[4786]: E0127 13:10:51.652962 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acf3913c-c965-4da8-91b3-6d4f2fa40f6b" containerName="extract-utilities" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.652967 
4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="acf3913c-c965-4da8-91b3-6d4f2fa40f6b" containerName="extract-utilities" Jan 27 13:10:51 crc kubenswrapper[4786]: E0127 13:10:51.652977 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c25e7dc-42ad-4c09-8187-354fd9f6d954" containerName="extract-utilities" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.652984 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c25e7dc-42ad-4c09-8187-354fd9f6d954" containerName="extract-utilities" Jan 27 13:10:51 crc kubenswrapper[4786]: E0127 13:10:51.652995 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90f2a321-e151-44e9-a16b-4c9e7b883b64" containerName="extract-content" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.653000 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="90f2a321-e151-44e9-a16b-4c9e7b883b64" containerName="extract-content" Jan 27 13:10:51 crc kubenswrapper[4786]: E0127 13:10:51.653010 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c25e7dc-42ad-4c09-8187-354fd9f6d954" containerName="registry-server" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.653016 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c25e7dc-42ad-4c09-8187-354fd9f6d954" containerName="registry-server" Jan 27 13:10:51 crc kubenswrapper[4786]: E0127 13:10:51.653033 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90f2a321-e151-44e9-a16b-4c9e7b883b64" containerName="registry-server" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.653038 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="90f2a321-e151-44e9-a16b-4c9e7b883b64" containerName="registry-server" Jan 27 13:10:51 crc kubenswrapper[4786]: E0127 13:10:51.653047 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acf3913c-c965-4da8-91b3-6d4f2fa40f6b" containerName="registry-server" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.653052 4786 
state_mem.go:107] "Deleted CPUSet assignment" podUID="acf3913c-c965-4da8-91b3-6d4f2fa40f6b" containerName="registry-server" Jan 27 13:10:51 crc kubenswrapper[4786]: E0127 13:10:51.653062 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0" containerName="registry-server" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.653067 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0" containerName="registry-server" Jan 27 13:10:51 crc kubenswrapper[4786]: E0127 13:10:51.653074 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c25e7dc-42ad-4c09-8187-354fd9f6d954" containerName="extract-content" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.653081 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c25e7dc-42ad-4c09-8187-354fd9f6d954" containerName="extract-content" Jan 27 13:10:51 crc kubenswrapper[4786]: E0127 13:10:51.653087 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0" containerName="extract-content" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.653093 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0" containerName="extract-content" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.654986 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="29f8e8ac-fdcb-43f3-8128-d8ca56bc84d0" containerName="registry-server" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.655004 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="90f2a321-e151-44e9-a16b-4c9e7b883b64" containerName="registry-server" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.655017 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c25e7dc-42ad-4c09-8187-354fd9f6d954" containerName="registry-server" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.655024 4786 
memory_manager.go:354] "RemoveStaleState removing state" podUID="acf3913c-c965-4da8-91b3-6d4f2fa40f6b" containerName="registry-server" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.657187 4786 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.658075 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262" gracePeriod=15 Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.658377 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.658403 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624" gracePeriod=15 Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.658716 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://e00de8064231a695efe43b465ac740c45561bd6e1998201059115249a9bd8118" gracePeriod=15 Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.658735 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" 
containerID="cri-o://22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6" gracePeriod=15 Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.659111 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead" gracePeriod=15 Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.660278 4786 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 13:10:51 crc kubenswrapper[4786]: E0127 13:10:51.660814 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.660849 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 13:10:51 crc kubenswrapper[4786]: E0127 13:10:51.660882 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.660898 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 13:10:51 crc kubenswrapper[4786]: E0127 13:10:51.660927 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.660942 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 13:10:51 crc kubenswrapper[4786]: E0127 13:10:51.660968 4786 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.660986 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 13:10:51 crc kubenswrapper[4786]: E0127 13:10:51.661035 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.661064 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 13:10:51 crc kubenswrapper[4786]: E0127 13:10:51.661110 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.661127 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 13:10:51 crc kubenswrapper[4786]: E0127 13:10:51.661149 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.661177 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.661706 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.661756 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 13:10:51 crc 
kubenswrapper[4786]: I0127 13:10:51.661788 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.661829 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.661862 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.661901 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.723141 4786 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.723205 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.821993 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 
13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.822036 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.822058 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.822102 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.822123 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.822155 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.822187 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.822231 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.923335 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.923484 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.923520 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" 
Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.923483 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.923726 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.923841 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.923926 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.924056 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 13:10:51 
crc kubenswrapper[4786]: I0127 13:10:51.924191 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.924327 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.924472 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.923992 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.924132 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.924400 4786 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.924266 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:10:51 crc kubenswrapper[4786]: I0127 13:10:51.924730 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:10:52 crc kubenswrapper[4786]: I0127 13:10:52.605010 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 13:10:52 crc kubenswrapper[4786]: I0127 13:10:52.607061 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 13:10:52 crc kubenswrapper[4786]: I0127 13:10:52.607964 4786 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624" exitCode=0 Jan 27 13:10:52 crc kubenswrapper[4786]: I0127 13:10:52.608091 4786 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6" exitCode=0 Jan 27 13:10:52 crc kubenswrapper[4786]: I0127 13:10:52.608104 4786 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e00de8064231a695efe43b465ac740c45561bd6e1998201059115249a9bd8118" exitCode=0 Jan 27 13:10:52 crc kubenswrapper[4786]: I0127 13:10:52.608113 4786 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead" exitCode=2 Jan 27 13:10:52 crc kubenswrapper[4786]: I0127 13:10:52.608074 4786 scope.go:117] "RemoveContainer" containerID="144671fd31d8a204bc24d118472a1cac83cacecd700b0c8fb0e62d776dffab30" Jan 27 13:10:52 crc kubenswrapper[4786]: I0127 13:10:52.611176 4786 generic.go:334] "Generic (PLEG): container finished" podID="6716fde7-7c58-4306-99f9-67baca4a9238" containerID="b03d0f42cc35385326109b6cdbcc133f1c3b29562f43505219cacde99d65f891" exitCode=0 Jan 27 13:10:52 crc kubenswrapper[4786]: I0127 13:10:52.611249 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"6716fde7-7c58-4306-99f9-67baca4a9238","Type":"ContainerDied","Data":"b03d0f42cc35385326109b6cdbcc133f1c3b29562f43505219cacde99d65f891"} Jan 27 13:10:52 crc kubenswrapper[4786]: I0127 13:10:52.612866 4786 status_manager.go:851] "Failed to get status for pod" podUID="6716fde7-7c58-4306-99f9-67baca4a9238" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.5:6443: connect: connection refused" Jan 27 13:10:52 crc kubenswrapper[4786]: I0127 13:10:52.613209 4786 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.5:6443: connect: connection refused" Jan 27 13:10:53 crc kubenswrapper[4786]: I0127 13:10:53.621528 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 13:10:53 crc kubenswrapper[4786]: I0127 13:10:53.947639 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 13:10:53 crc kubenswrapper[4786]: I0127 13:10:53.948944 4786 status_manager.go:851] "Failed to get status for pod" podUID="6716fde7-7c58-4306-99f9-67baca4a9238" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.5:6443: connect: connection refused" Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.033457 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.034527 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.035112 4786 status_manager.go:851] "Failed to get status for pod" podUID="6716fde7-7c58-4306-99f9-67baca4a9238" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.5:6443: connect: connection refused" Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.035473 4786 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.5:6443: connect: connection refused" Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.057003 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6716fde7-7c58-4306-99f9-67baca4a9238-var-lock\") pod \"6716fde7-7c58-4306-99f9-67baca4a9238\" (UID: \"6716fde7-7c58-4306-99f9-67baca4a9238\") " Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.057046 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6716fde7-7c58-4306-99f9-67baca4a9238-kubelet-dir\") pod \"6716fde7-7c58-4306-99f9-67baca4a9238\" (UID: \"6716fde7-7c58-4306-99f9-67baca4a9238\") " Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.057086 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6716fde7-7c58-4306-99f9-67baca4a9238-kube-api-access\") pod \"6716fde7-7c58-4306-99f9-67baca4a9238\" (UID: \"6716fde7-7c58-4306-99f9-67baca4a9238\") " Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.057845 4786 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6716fde7-7c58-4306-99f9-67baca4a9238-var-lock" (OuterVolumeSpecName: "var-lock") pod "6716fde7-7c58-4306-99f9-67baca4a9238" (UID: "6716fde7-7c58-4306-99f9-67baca4a9238"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.057923 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6716fde7-7c58-4306-99f9-67baca4a9238-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6716fde7-7c58-4306-99f9-67baca4a9238" (UID: "6716fde7-7c58-4306-99f9-67baca4a9238"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.062732 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6716fde7-7c58-4306-99f9-67baca4a9238-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6716fde7-7c58-4306-99f9-67baca4a9238" (UID: "6716fde7-7c58-4306-99f9-67baca4a9238"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.158868 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.159313 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.159361 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.159681 4786 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6716fde7-7c58-4306-99f9-67baca4a9238-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.159699 4786 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6716fde7-7c58-4306-99f9-67baca4a9238-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.159714 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6716fde7-7c58-4306-99f9-67baca4a9238-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.159201 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.159772 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.159790 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.260764 4786 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.260817 4786 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.260829 4786 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.637629 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.638530 4786 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262" exitCode=0 Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.638654 4786 scope.go:117] "RemoveContainer" containerID="013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624" Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.638799 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.641553 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"6716fde7-7c58-4306-99f9-67baca4a9238","Type":"ContainerDied","Data":"ee5bf87a7955ac18f88930af255ffcc0b271dcb8e10a1a1ec4c61a4410ea3979"} Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.641597 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee5bf87a7955ac18f88930af255ffcc0b271dcb8e10a1a1ec4c61a4410ea3979" Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.641677 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.656246 4786 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.5:6443: connect: connection refused" Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.656914 4786 status_manager.go:851] "Failed to get status for pod" podUID="6716fde7-7c58-4306-99f9-67baca4a9238" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.5:6443: connect: connection refused" Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.659377 4786 scope.go:117] "RemoveContainer" containerID="22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6" Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.659780 4786 status_manager.go:851] "Failed to get status for pod" podUID="6716fde7-7c58-4306-99f9-67baca4a9238" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.5:6443: connect: connection refused" Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.660264 4786 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.5:6443: connect: connection refused" Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.670409 4786 scope.go:117] "RemoveContainer" containerID="e00de8064231a695efe43b465ac740c45561bd6e1998201059115249a9bd8118" Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.681005 4786 scope.go:117] "RemoveContainer" containerID="786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead" Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.698777 4786 scope.go:117] "RemoveContainer" containerID="02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262" Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.712199 4786 scope.go:117] "RemoveContainer" containerID="120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f" Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.731362 4786 scope.go:117] "RemoveContainer" containerID="013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624" Jan 27 13:10:54 crc kubenswrapper[4786]: E0127 13:10:54.731877 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624\": container with ID starting with 013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624 not found: ID does not exist" containerID="013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624" Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.731908 4786 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624"} err="failed to get container status \"013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624\": rpc error: code = NotFound desc = could not find container \"013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624\": container with ID starting with 013eb469cf4c20b7fbbaad7860661dc439f6554c2b296489fbb3b2aafd15b624 not found: ID does not exist" Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.732449 4786 scope.go:117] "RemoveContainer" containerID="22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6" Jan 27 13:10:54 crc kubenswrapper[4786]: E0127 13:10:54.732985 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\": container with ID starting with 22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6 not found: ID does not exist" containerID="22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6" Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.733025 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6"} err="failed to get container status \"22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\": rpc error: code = NotFound desc = could not find container \"22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6\": container with ID starting with 22a001c1e0a01b6e4a3d212def27bd4808036f0f4dd5395d4ebbd8b8cf9fbcb6 not found: ID does not exist" Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.733058 4786 scope.go:117] "RemoveContainer" containerID="e00de8064231a695efe43b465ac740c45561bd6e1998201059115249a9bd8118" Jan 27 13:10:54 crc kubenswrapper[4786]: E0127 13:10:54.733597 4786 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e00de8064231a695efe43b465ac740c45561bd6e1998201059115249a9bd8118\": container with ID starting with e00de8064231a695efe43b465ac740c45561bd6e1998201059115249a9bd8118 not found: ID does not exist" containerID="e00de8064231a695efe43b465ac740c45561bd6e1998201059115249a9bd8118" Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.733638 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e00de8064231a695efe43b465ac740c45561bd6e1998201059115249a9bd8118"} err="failed to get container status \"e00de8064231a695efe43b465ac740c45561bd6e1998201059115249a9bd8118\": rpc error: code = NotFound desc = could not find container \"e00de8064231a695efe43b465ac740c45561bd6e1998201059115249a9bd8118\": container with ID starting with e00de8064231a695efe43b465ac740c45561bd6e1998201059115249a9bd8118 not found: ID does not exist" Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.733659 4786 scope.go:117] "RemoveContainer" containerID="786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead" Jan 27 13:10:54 crc kubenswrapper[4786]: E0127 13:10:54.734825 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\": container with ID starting with 786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead not found: ID does not exist" containerID="786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead" Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.734849 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead"} err="failed to get container status \"786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\": rpc error: code = NotFound desc = could 
not find container \"786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead\": container with ID starting with 786e47abb0a0fe2528c42d12a795be2f8280ece1624d70c8a07349e668cecead not found: ID does not exist" Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.734864 4786 scope.go:117] "RemoveContainer" containerID="02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262" Jan 27 13:10:54 crc kubenswrapper[4786]: E0127 13:10:54.735989 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\": container with ID starting with 02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262 not found: ID does not exist" containerID="02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262" Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.736014 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262"} err="failed to get container status \"02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\": rpc error: code = NotFound desc = could not find container \"02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262\": container with ID starting with 02a1b78b8bf81111cc6795d58ce27c4db183267b52d7308a585d8b1e42eb4262 not found: ID does not exist" Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.736047 4786 scope.go:117] "RemoveContainer" containerID="120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f" Jan 27 13:10:54 crc kubenswrapper[4786]: E0127 13:10:54.736445 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\": container with ID starting with 120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f not found: 
ID does not exist" containerID="120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f" Jan 27 13:10:54 crc kubenswrapper[4786]: I0127 13:10:54.736468 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f"} err="failed to get container status \"120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\": rpc error: code = NotFound desc = could not find container \"120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f\": container with ID starting with 120e7fd99f980c48095095c5f35eebfa3f0bd82019868e66f86e09f1978a9b3f not found: ID does not exist" Jan 27 13:10:55 crc kubenswrapper[4786]: I0127 13:10:55.470559 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 27 13:10:55 crc kubenswrapper[4786]: E0127 13:10:55.935576 4786 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" Jan 27 13:10:55 crc kubenswrapper[4786]: E0127 13:10:55.936376 4786 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" Jan 27 13:10:55 crc kubenswrapper[4786]: E0127 13:10:55.936971 4786 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" Jan 27 13:10:55 crc kubenswrapper[4786]: E0127 13:10:55.938487 4786 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" Jan 27 13:10:55 crc kubenswrapper[4786]: E0127 13:10:55.938816 4786 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" Jan 27 13:10:55 crc kubenswrapper[4786]: I0127 13:10:55.938840 4786 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 27 13:10:55 crc kubenswrapper[4786]: E0127 13:10:55.939045 4786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" interval="200ms" Jan 27 13:10:56 crc kubenswrapper[4786]: E0127 13:10:56.140334 4786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" interval="400ms" Jan 27 13:10:56 crc kubenswrapper[4786]: E0127 13:10:56.540969 4786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" interval="800ms" Jan 27 13:10:56 crc kubenswrapper[4786]: E0127 13:10:56.704663 4786 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.5:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 
13:10:56 crc kubenswrapper[4786]: I0127 13:10:56.705097 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 13:10:56 crc kubenswrapper[4786]: E0127 13:10:56.726570 4786 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.5:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e9892114861d5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 13:10:56.725787093 +0000 UTC m=+239.936401212,LastTimestamp:2026-01-27 13:10:56.725787093 +0000 UTC m=+239.936401212,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 13:10:57 crc kubenswrapper[4786]: E0127 13:10:57.342279 4786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" interval="1.6s" Jan 27 13:10:57 crc kubenswrapper[4786]: E0127 13:10:57.384589 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:10:57Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:10:57Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:10:57Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T13:10:57Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" Jan 27 13:10:57 crc kubenswrapper[4786]: E0127 13:10:57.384862 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" Jan 27 13:10:57 crc kubenswrapper[4786]: E0127 13:10:57.385093 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" Jan 27 13:10:57 crc kubenswrapper[4786]: E0127 13:10:57.385344 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" Jan 27 13:10:57 
crc kubenswrapper[4786]: E0127 13:10:57.385571 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" Jan 27 13:10:57 crc kubenswrapper[4786]: E0127 13:10:57.385589 4786 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 13:10:57 crc kubenswrapper[4786]: I0127 13:10:57.468097 4786 status_manager.go:851] "Failed to get status for pod" podUID="6716fde7-7c58-4306-99f9-67baca4a9238" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.5:6443: connect: connection refused" Jan 27 13:10:57 crc kubenswrapper[4786]: I0127 13:10:57.659316 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"28cd0bc413e45aba559ccd7af7eaabf11a5721cdd06d691d9dc35ba81a4a59fc"} Jan 27 13:10:57 crc kubenswrapper[4786]: I0127 13:10:57.659366 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"3c2360913c62c922d47fc235ef4ad8ed8ac6fe77aa41e2f80a4869233fd1e3cc"} Jan 27 13:10:57 crc kubenswrapper[4786]: I0127 13:10:57.659933 4786 status_manager.go:851] "Failed to get status for pod" podUID="6716fde7-7c58-4306-99f9-67baca4a9238" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.5:6443: connect: connection refused" Jan 27 13:10:57 crc kubenswrapper[4786]: E0127 13:10:57.659936 4786 kubelet.go:1929] "Failed creating a mirror pod 
for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.5:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 13:10:58 crc kubenswrapper[4786]: E0127 13:10:58.943860 4786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" interval="3.2s" Jan 27 13:11:01 crc kubenswrapper[4786]: E0127 13:11:01.133594 4786 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.5:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e9892114861d5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 13:10:56.725787093 +0000 UTC m=+239.936401212,LastTimestamp:2026-01-27 13:10:56.725787093 +0000 UTC m=+239.936401212,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 13:11:02 crc kubenswrapper[4786]: E0127 13:11:02.144410 4786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" interval="6.4s" Jan 27 13:11:04 crc kubenswrapper[4786]: I0127 13:11:04.082799 4786 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 27 13:11:04 crc kubenswrapper[4786]: I0127 13:11:04.083121 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 27 13:11:04 crc kubenswrapper[4786]: I0127 13:11:04.463983 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:11:04 crc kubenswrapper[4786]: I0127 13:11:04.464907 4786 status_manager.go:851] "Failed to get status for pod" podUID="6716fde7-7c58-4306-99f9-67baca4a9238" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.5:6443: connect: connection refused" Jan 27 13:11:04 crc kubenswrapper[4786]: I0127 13:11:04.478514 4786 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c8573a07-5c6b-490a-abd2-e38fe66ef4f4" Jan 27 13:11:04 crc kubenswrapper[4786]: I0127 13:11:04.478554 4786 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c8573a07-5c6b-490a-abd2-e38fe66ef4f4" Jan 27 13:11:04 crc kubenswrapper[4786]: E0127 13:11:04.479105 4786 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.5:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:11:04 crc kubenswrapper[4786]: I0127 13:11:04.479555 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:11:04 crc kubenswrapper[4786]: I0127 13:11:04.695878 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 13:11:04 crc kubenswrapper[4786]: I0127 13:11:04.696279 4786 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="181c870b27579fdf828f52cee2ce312219d03f4d4c298f83cf4f74d0e5af0fc1" exitCode=1 Jan 27 13:11:04 crc kubenswrapper[4786]: I0127 13:11:04.696359 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"181c870b27579fdf828f52cee2ce312219d03f4d4c298f83cf4f74d0e5af0fc1"} Jan 27 13:11:04 crc kubenswrapper[4786]: I0127 13:11:04.696762 4786 scope.go:117] "RemoveContainer" containerID="181c870b27579fdf828f52cee2ce312219d03f4d4c298f83cf4f74d0e5af0fc1" Jan 27 13:11:04 crc kubenswrapper[4786]: I0127 13:11:04.697649 4786 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.5:6443: connect: connection refused" Jan 27 13:11:04 crc kubenswrapper[4786]: I0127 13:11:04.698352 4786 status_manager.go:851] "Failed to get status for pod" podUID="6716fde7-7c58-4306-99f9-67baca4a9238" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.5:6443: connect: connection refused" Jan 27 13:11:04 crc kubenswrapper[4786]: I0127 13:11:04.698374 4786 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b65822344eb3c99cf0d6aca53f29742376bc3db91f91ebca84a463ddea0da02a"} Jan 27 13:11:05 crc kubenswrapper[4786]: I0127 13:11:05.637174 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" podUID="8156e329-ca23-4079-8b23-ba0c32cc89a9" containerName="oauth-openshift" containerID="cri-o://3dc291bed09e2e266bf31c4257b99aa71c0336fe7c0143d2c4eeb431200b7460" gracePeriod=15 Jan 27 13:11:05 crc kubenswrapper[4786]: I0127 13:11:05.706766 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 13:11:05 crc kubenswrapper[4786]: I0127 13:11:05.707122 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"822fa3d5243a9bfcf58e6201e07aed0dddcd71d19cb2981b8c50c16f46a7292d"} Jan 27 13:11:05 crc kubenswrapper[4786]: I0127 13:11:05.708188 4786 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.5:6443: connect: connection refused" Jan 27 13:11:05 crc kubenswrapper[4786]: I0127 13:11:05.708407 4786 status_manager.go:851] "Failed to get status for pod" podUID="6716fde7-7c58-4306-99f9-67baca4a9238" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.5:6443: connect: connection refused" Jan 27 13:11:05 
crc kubenswrapper[4786]: I0127 13:11:05.709014 4786 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="3418a489bb103c5a4a3848abccf713d2c508c1db57f964ed3732b0f974dff7f2" exitCode=0 Jan 27 13:11:05 crc kubenswrapper[4786]: I0127 13:11:05.709040 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"3418a489bb103c5a4a3848abccf713d2c508c1db57f964ed3732b0f974dff7f2"} Jan 27 13:11:05 crc kubenswrapper[4786]: I0127 13:11:05.709597 4786 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c8573a07-5c6b-490a-abd2-e38fe66ef4f4" Jan 27 13:11:05 crc kubenswrapper[4786]: I0127 13:11:05.709678 4786 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c8573a07-5c6b-490a-abd2-e38fe66ef4f4" Jan 27 13:11:05 crc kubenswrapper[4786]: I0127 13:11:05.709788 4786 status_manager.go:851] "Failed to get status for pod" podUID="6716fde7-7c58-4306-99f9-67baca4a9238" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.5:6443: connect: connection refused" Jan 27 13:11:05 crc kubenswrapper[4786]: E0127 13:11:05.710338 4786 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.5:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:11:05 crc kubenswrapper[4786]: I0127 13:11:05.710669 4786 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.5:6443: connect: connection refused" Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.012918 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.013906 4786 status_manager.go:851] "Failed to get status for pod" podUID="8156e329-ca23-4079-8b23-ba0c32cc89a9" pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-nsc4d\": dial tcp 38.102.83.5:6443: connect: connection refused" Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.014319 4786 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.5:6443: connect: connection refused" Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.014634 4786 status_manager.go:851] "Failed to get status for pod" podUID="6716fde7-7c58-4306-99f9-67baca4a9238" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.5:6443: connect: connection refused" Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.112268 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvmwm\" (UniqueName: \"kubernetes.io/projected/8156e329-ca23-4079-8b23-ba0c32cc89a9-kube-api-access-hvmwm\") pod \"8156e329-ca23-4079-8b23-ba0c32cc89a9\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " Jan 27 13:11:06 crc 
kubenswrapper[4786]: I0127 13:11:06.112745 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-user-template-provider-selection\") pod \"8156e329-ca23-4079-8b23-ba0c32cc89a9\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.112823 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-system-cliconfig\") pod \"8156e329-ca23-4079-8b23-ba0c32cc89a9\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.112859 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-system-trusted-ca-bundle\") pod \"8156e329-ca23-4079-8b23-ba0c32cc89a9\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.112888 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-system-router-certs\") pod \"8156e329-ca23-4079-8b23-ba0c32cc89a9\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.112925 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-system-session\") pod \"8156e329-ca23-4079-8b23-ba0c32cc89a9\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.113053 4786 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-user-template-error\") pod \"8156e329-ca23-4079-8b23-ba0c32cc89a9\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.113090 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-system-ocp-branding-template\") pod \"8156e329-ca23-4079-8b23-ba0c32cc89a9\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.113156 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-system-service-ca\") pod \"8156e329-ca23-4079-8b23-ba0c32cc89a9\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.113191 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-user-template-login\") pod \"8156e329-ca23-4079-8b23-ba0c32cc89a9\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.113215 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-system-serving-cert\") pod \"8156e329-ca23-4079-8b23-ba0c32cc89a9\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.113237 4786 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-user-idp-0-file-data\") pod \"8156e329-ca23-4079-8b23-ba0c32cc89a9\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.113274 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8156e329-ca23-4079-8b23-ba0c32cc89a9-audit-policies\") pod \"8156e329-ca23-4079-8b23-ba0c32cc89a9\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.113298 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8156e329-ca23-4079-8b23-ba0c32cc89a9-audit-dir\") pod \"8156e329-ca23-4079-8b23-ba0c32cc89a9\" (UID: \"8156e329-ca23-4079-8b23-ba0c32cc89a9\") " Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.113648 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8156e329-ca23-4079-8b23-ba0c32cc89a9-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "8156e329-ca23-4079-8b23-ba0c32cc89a9" (UID: "8156e329-ca23-4079-8b23-ba0c32cc89a9"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.114035 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "8156e329-ca23-4079-8b23-ba0c32cc89a9" (UID: "8156e329-ca23-4079-8b23-ba0c32cc89a9"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.114054 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "8156e329-ca23-4079-8b23-ba0c32cc89a9" (UID: "8156e329-ca23-4079-8b23-ba0c32cc89a9"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.115053 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "8156e329-ca23-4079-8b23-ba0c32cc89a9" (UID: "8156e329-ca23-4079-8b23-ba0c32cc89a9"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.115897 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8156e329-ca23-4079-8b23-ba0c32cc89a9-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "8156e329-ca23-4079-8b23-ba0c32cc89a9" (UID: "8156e329-ca23-4079-8b23-ba0c32cc89a9"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.119110 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "8156e329-ca23-4079-8b23-ba0c32cc89a9" (UID: "8156e329-ca23-4079-8b23-ba0c32cc89a9"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.119258 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8156e329-ca23-4079-8b23-ba0c32cc89a9-kube-api-access-hvmwm" (OuterVolumeSpecName: "kube-api-access-hvmwm") pod "8156e329-ca23-4079-8b23-ba0c32cc89a9" (UID: "8156e329-ca23-4079-8b23-ba0c32cc89a9"). InnerVolumeSpecName "kube-api-access-hvmwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.119478 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "8156e329-ca23-4079-8b23-ba0c32cc89a9" (UID: "8156e329-ca23-4079-8b23-ba0c32cc89a9"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.119735 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "8156e329-ca23-4079-8b23-ba0c32cc89a9" (UID: "8156e329-ca23-4079-8b23-ba0c32cc89a9"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.120133 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "8156e329-ca23-4079-8b23-ba0c32cc89a9" (UID: "8156e329-ca23-4079-8b23-ba0c32cc89a9"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.120296 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "8156e329-ca23-4079-8b23-ba0c32cc89a9" (UID: "8156e329-ca23-4079-8b23-ba0c32cc89a9"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.120528 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "8156e329-ca23-4079-8b23-ba0c32cc89a9" (UID: "8156e329-ca23-4079-8b23-ba0c32cc89a9"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.120572 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "8156e329-ca23-4079-8b23-ba0c32cc89a9" (UID: "8156e329-ca23-4079-8b23-ba0c32cc89a9"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.122928 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "8156e329-ca23-4079-8b23-ba0c32cc89a9" (UID: "8156e329-ca23-4079-8b23-ba0c32cc89a9"). 
InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.214535 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.214569 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.214584 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.214593 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.214618 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.214632 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.214641 4786 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.214651 4786 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8156e329-ca23-4079-8b23-ba0c32cc89a9-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.214661 4786 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8156e329-ca23-4079-8b23-ba0c32cc89a9-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.214669 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvmwm\" (UniqueName: \"kubernetes.io/projected/8156e329-ca23-4079-8b23-ba0c32cc89a9-kube-api-access-hvmwm\") on node \"crc\" DevicePath \"\"" Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.214680 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.214690 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.214698 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 13:11:06 crc 
kubenswrapper[4786]: I0127 13:11:06.214708 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8156e329-ca23-4079-8b23-ba0c32cc89a9-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.715056 4786 generic.go:334] "Generic (PLEG): container finished" podID="8156e329-ca23-4079-8b23-ba0c32cc89a9" containerID="3dc291bed09e2e266bf31c4257b99aa71c0336fe7c0143d2c4eeb431200b7460" exitCode=0 Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.715152 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.715567 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" event={"ID":"8156e329-ca23-4079-8b23-ba0c32cc89a9","Type":"ContainerDied","Data":"3dc291bed09e2e266bf31c4257b99aa71c0336fe7c0143d2c4eeb431200b7460"} Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.715618 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-nsc4d" event={"ID":"8156e329-ca23-4079-8b23-ba0c32cc89a9","Type":"ContainerDied","Data":"63d955c9eb944fbc9b633d81f22a480c0b0015118439f0499ec3e88b22dc9f57"} Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.715635 4786 scope.go:117] "RemoveContainer" containerID="3dc291bed09e2e266bf31c4257b99aa71c0336fe7c0143d2c4eeb431200b7460" Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.721339 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6e46a6be4354ba59f429c79353a6d84638535d72874ad28184b010f443826060"} Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.721375 4786 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ff6a89fed2f1812ca14a1e4afb51dde133275b41b6e28a86d16db945e6b16b78"} Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.721389 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b29c0b9c5e9bf2c384959aa96fede907ae529cdc0bcacaa5c01c9cec4696f692"} Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.721464 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e1747600454e6f65292ec316a0a8807ddd8ed8a55b6d32ac7bc138e1eea52dd9"} Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.733680 4786 scope.go:117] "RemoveContainer" containerID="3dc291bed09e2e266bf31c4257b99aa71c0336fe7c0143d2c4eeb431200b7460" Jan 27 13:11:06 crc kubenswrapper[4786]: E0127 13:11:06.734296 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dc291bed09e2e266bf31c4257b99aa71c0336fe7c0143d2c4eeb431200b7460\": container with ID starting with 3dc291bed09e2e266bf31c4257b99aa71c0336fe7c0143d2c4eeb431200b7460 not found: ID does not exist" containerID="3dc291bed09e2e266bf31c4257b99aa71c0336fe7c0143d2c4eeb431200b7460" Jan 27 13:11:06 crc kubenswrapper[4786]: I0127 13:11:06.734333 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dc291bed09e2e266bf31c4257b99aa71c0336fe7c0143d2c4eeb431200b7460"} err="failed to get container status \"3dc291bed09e2e266bf31c4257b99aa71c0336fe7c0143d2c4eeb431200b7460\": rpc error: code = NotFound desc = could not find container \"3dc291bed09e2e266bf31c4257b99aa71c0336fe7c0143d2c4eeb431200b7460\": container with ID starting with 
3dc291bed09e2e266bf31c4257b99aa71c0336fe7c0143d2c4eeb431200b7460 not found: ID does not exist" Jan 27 13:11:07 crc kubenswrapper[4786]: I0127 13:11:07.730728 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e64f2d0dd2df6e71c7600b4897a191a23388d8c98bc71797f55fa089b8a79f45"} Jan 27 13:11:07 crc kubenswrapper[4786]: I0127 13:11:07.731675 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:11:07 crc kubenswrapper[4786]: I0127 13:11:07.731869 4786 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c8573a07-5c6b-490a-abd2-e38fe66ef4f4" Jan 27 13:11:07 crc kubenswrapper[4786]: I0127 13:11:07.731967 4786 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c8573a07-5c6b-490a-abd2-e38fe66ef4f4" Jan 27 13:11:09 crc kubenswrapper[4786]: I0127 13:11:09.479851 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:11:09 crc kubenswrapper[4786]: I0127 13:11:09.480187 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:11:09 crc kubenswrapper[4786]: I0127 13:11:09.486885 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:11:10 crc kubenswrapper[4786]: I0127 13:11:10.064035 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 13:11:12 crc kubenswrapper[4786]: I0127 13:11:12.166311 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 13:11:12 crc 
kubenswrapper[4786]: I0127 13:11:12.166674 4786 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 27 13:11:12 crc kubenswrapper[4786]: I0127 13:11:12.166742 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 27 13:11:12 crc kubenswrapper[4786]: I0127 13:11:12.831665 4786 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:11:12 crc kubenswrapper[4786]: I0127 13:11:12.971027 4786 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="88e033b6-a85d-4c48-ba94-593b2816cdbf" Jan 27 13:11:13 crc kubenswrapper[4786]: I0127 13:11:13.761781 4786 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c8573a07-5c6b-490a-abd2-e38fe66ef4f4" Jan 27 13:11:13 crc kubenswrapper[4786]: I0127 13:11:13.762022 4786 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c8573a07-5c6b-490a-abd2-e38fe66ef4f4" Jan 27 13:11:13 crc kubenswrapper[4786]: I0127 13:11:13.764354 4786 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="88e033b6-a85d-4c48-ba94-593b2816cdbf" Jan 27 13:11:13 crc kubenswrapper[4786]: I0127 13:11:13.765220 4786 
status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://e1747600454e6f65292ec316a0a8807ddd8ed8a55b6d32ac7bc138e1eea52dd9" Jan 27 13:11:13 crc kubenswrapper[4786]: I0127 13:11:13.765240 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:11:14 crc kubenswrapper[4786]: I0127 13:11:14.766269 4786 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c8573a07-5c6b-490a-abd2-e38fe66ef4f4" Jan 27 13:11:14 crc kubenswrapper[4786]: I0127 13:11:14.766301 4786 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c8573a07-5c6b-490a-abd2-e38fe66ef4f4" Jan 27 13:11:14 crc kubenswrapper[4786]: I0127 13:11:14.769153 4786 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="88e033b6-a85d-4c48-ba94-593b2816cdbf" Jan 27 13:11:22 crc kubenswrapper[4786]: I0127 13:11:22.167062 4786 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 27 13:11:22 crc kubenswrapper[4786]: I0127 13:11:22.167590 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 27 13:11:22 crc kubenswrapper[4786]: I0127 13:11:22.482570 4786 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-oauth-apiserver"/"serving-cert" Jan 27 13:11:22 crc kubenswrapper[4786]: I0127 13:11:22.966147 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 13:11:23 crc kubenswrapper[4786]: I0127 13:11:23.212203 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 27 13:11:23 crc kubenswrapper[4786]: I0127 13:11:23.821947 4786 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 27 13:11:23 crc kubenswrapper[4786]: I0127 13:11:23.826948 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-nsc4d","openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 13:11:23 crc kubenswrapper[4786]: I0127 13:11:23.827002 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 13:11:23 crc kubenswrapper[4786]: I0127 13:11:23.834062 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 13:11:23 crc kubenswrapper[4786]: I0127 13:11:23.881809 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=11.881785494 podStartE2EDuration="11.881785494s" podCreationTimestamp="2026-01-27 13:11:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:11:23.861418958 +0000 UTC m=+267.072033107" watchObservedRunningTime="2026-01-27 13:11:23.881785494 +0000 UTC m=+267.092399653" Jan 27 13:11:24 crc kubenswrapper[4786]: I0127 13:11:24.006947 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 27 13:11:24 crc kubenswrapper[4786]: I0127 13:11:24.040557 4786 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 27 13:11:24 crc kubenswrapper[4786]: I0127 13:11:24.141062 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 27 13:11:24 crc kubenswrapper[4786]: I0127 13:11:24.204578 4786 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 13:11:24 crc kubenswrapper[4786]: I0127 13:11:24.204900 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://28cd0bc413e45aba559ccd7af7eaabf11a5721cdd06d691d9dc35ba81a4a59fc" gracePeriod=5 Jan 27 13:11:24 crc kubenswrapper[4786]: I0127 13:11:24.328028 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 27 13:11:24 crc kubenswrapper[4786]: I0127 13:11:24.421256 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 13:11:24 crc kubenswrapper[4786]: I0127 13:11:24.630080 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 27 13:11:24 crc kubenswrapper[4786]: I0127 13:11:24.807228 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 27 13:11:24 crc kubenswrapper[4786]: I0127 13:11:24.826482 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 27 13:11:24 crc kubenswrapper[4786]: I0127 13:11:24.846289 4786 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 27 13:11:24 crc kubenswrapper[4786]: I0127 13:11:24.948752 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 27 13:11:24 crc kubenswrapper[4786]: I0127 13:11:24.959681 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 27 13:11:25 crc kubenswrapper[4786]: I0127 13:11:24.998034 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 13:11:25 crc kubenswrapper[4786]: I0127 13:11:25.055530 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 27 13:11:25 crc kubenswrapper[4786]: I0127 13:11:25.094362 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 27 13:11:25 crc kubenswrapper[4786]: I0127 13:11:25.158573 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 27 13:11:25 crc kubenswrapper[4786]: I0127 13:11:25.377058 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 27 13:11:25 crc kubenswrapper[4786]: I0127 13:11:25.464288 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 27 13:11:25 crc kubenswrapper[4786]: I0127 13:11:25.471156 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8156e329-ca23-4079-8b23-ba0c32cc89a9" path="/var/lib/kubelet/pods/8156e329-ca23-4079-8b23-ba0c32cc89a9/volumes" Jan 27 13:11:25 crc kubenswrapper[4786]: I0127 13:11:25.576191 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" 
Jan 27 13:11:25 crc kubenswrapper[4786]: I0127 13:11:25.582647 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 27 13:11:25 crc kubenswrapper[4786]: I0127 13:11:25.734738 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 27 13:11:25 crc kubenswrapper[4786]: I0127 13:11:25.792203 4786 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 27 13:11:25 crc kubenswrapper[4786]: I0127 13:11:25.943122 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 27 13:11:26 crc kubenswrapper[4786]: I0127 13:11:26.106235 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 27 13:11:26 crc kubenswrapper[4786]: I0127 13:11:26.148748 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 27 13:11:26 crc kubenswrapper[4786]: I0127 13:11:26.221225 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 27 13:11:26 crc kubenswrapper[4786]: I0127 13:11:26.314474 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 13:11:26 crc kubenswrapper[4786]: I0127 13:11:26.365136 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 27 13:11:26 crc kubenswrapper[4786]: I0127 13:11:26.429010 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 27 13:11:26 crc kubenswrapper[4786]: I0127 13:11:26.475963 4786 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 27 13:11:26 crc kubenswrapper[4786]: I0127 13:11:26.480223 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 27 13:11:26 crc kubenswrapper[4786]: I0127 13:11:26.584840 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 27 13:11:26 crc kubenswrapper[4786]: I0127 13:11:26.657826 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 13:11:26 crc kubenswrapper[4786]: I0127 13:11:26.665828 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 27 13:11:26 crc kubenswrapper[4786]: I0127 13:11:26.673251 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 27 13:11:26 crc kubenswrapper[4786]: I0127 13:11:26.685786 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 27 13:11:26 crc kubenswrapper[4786]: I0127 13:11:26.704934 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 13:11:26 crc kubenswrapper[4786]: I0127 13:11:26.901580 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 27 13:11:26 crc kubenswrapper[4786]: I0127 13:11:26.917494 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 27 13:11:26 crc kubenswrapper[4786]: I0127 13:11:26.945196 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 27 13:11:27 crc kubenswrapper[4786]: I0127 13:11:27.008978 4786 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 27 13:11:27 crc kubenswrapper[4786]: I0127 13:11:27.009538 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 27 13:11:27 crc kubenswrapper[4786]: I0127 13:11:27.067367 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 13:11:27 crc kubenswrapper[4786]: I0127 13:11:27.237677 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 27 13:11:27 crc kubenswrapper[4786]: I0127 13:11:27.270915 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 27 13:11:27 crc kubenswrapper[4786]: I0127 13:11:27.289754 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 27 13:11:27 crc kubenswrapper[4786]: I0127 13:11:27.369416 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 27 13:11:27 crc kubenswrapper[4786]: I0127 13:11:27.460885 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 27 13:11:27 crc kubenswrapper[4786]: I0127 13:11:27.630897 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 13:11:27 crc kubenswrapper[4786]: I0127 13:11:27.713818 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 27 13:11:27 crc kubenswrapper[4786]: I0127 13:11:27.746902 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 13:11:27 crc 
kubenswrapper[4786]: I0127 13:11:27.965132 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 27 13:11:28 crc kubenswrapper[4786]: I0127 13:11:28.062734 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 27 13:11:28 crc kubenswrapper[4786]: I0127 13:11:28.124258 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 27 13:11:28 crc kubenswrapper[4786]: I0127 13:11:28.314206 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 13:11:28 crc kubenswrapper[4786]: I0127 13:11:28.430686 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 27 13:11:28 crc kubenswrapper[4786]: I0127 13:11:28.449755 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 27 13:11:28 crc kubenswrapper[4786]: I0127 13:11:28.458354 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 13:11:28 crc kubenswrapper[4786]: I0127 13:11:28.646919 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 13:11:28 crc kubenswrapper[4786]: I0127 13:11:28.656984 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 13:11:28 crc kubenswrapper[4786]: I0127 13:11:28.675815 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 13:11:28 crc kubenswrapper[4786]: I0127 13:11:28.747264 4786 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"machine-api-operator-images" Jan 27 13:11:28 crc kubenswrapper[4786]: I0127 13:11:28.803773 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 27 13:11:28 crc kubenswrapper[4786]: I0127 13:11:28.863986 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 27 13:11:28 crc kubenswrapper[4786]: I0127 13:11:28.960947 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 27 13:11:28 crc kubenswrapper[4786]: I0127 13:11:28.975745 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 27 13:11:29 crc kubenswrapper[4786]: I0127 13:11:29.077495 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 27 13:11:29 crc kubenswrapper[4786]: I0127 13:11:29.101118 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 27 13:11:29 crc kubenswrapper[4786]: I0127 13:11:29.192192 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 13:11:29 crc kubenswrapper[4786]: I0127 13:11:29.197224 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 27 13:11:29 crc kubenswrapper[4786]: I0127 13:11:29.232266 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 27 13:11:29 crc kubenswrapper[4786]: I0127 13:11:29.248481 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 27 13:11:29 crc kubenswrapper[4786]: I0127 
13:11:29.256237 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 27 13:11:29 crc kubenswrapper[4786]: I0127 13:11:29.274305 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 13:11:29 crc kubenswrapper[4786]: I0127 13:11:29.490586 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 13:11:29 crc kubenswrapper[4786]: I0127 13:11:29.495807 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 27 13:11:29 crc kubenswrapper[4786]: I0127 13:11:29.515347 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 27 13:11:29 crc kubenswrapper[4786]: I0127 13:11:29.550851 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 27 13:11:29 crc kubenswrapper[4786]: I0127 13:11:29.595004 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 27 13:11:29 crc kubenswrapper[4786]: I0127 13:11:29.613734 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 13:11:29 crc kubenswrapper[4786]: I0127 13:11:29.621016 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 27 13:11:29 crc kubenswrapper[4786]: I0127 13:11:29.704968 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 27 13:11:29 crc kubenswrapper[4786]: I0127 13:11:29.753997 4786 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 27 13:11:29 crc kubenswrapper[4786]: I0127 13:11:29.803392 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 13:11:29 crc kubenswrapper[4786]: I0127 13:11:29.803450 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 13:11:29 crc kubenswrapper[4786]: I0127 13:11:29.815110 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 27 13:11:29 crc kubenswrapper[4786]: I0127 13:11:29.818235 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 27 13:11:29 crc kubenswrapper[4786]: I0127 13:11:29.845580 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 13:11:29 crc kubenswrapper[4786]: I0127 13:11:29.845632 4786 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="28cd0bc413e45aba559ccd7af7eaabf11a5721cdd06d691d9dc35ba81a4a59fc" exitCode=137 Jan 27 13:11:29 crc kubenswrapper[4786]: I0127 13:11:29.845669 4786 scope.go:117] "RemoveContainer" containerID="28cd0bc413e45aba559ccd7af7eaabf11a5721cdd06d691d9dc35ba81a4a59fc" Jan 27 13:11:29 crc kubenswrapper[4786]: I0127 13:11:29.845777 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 13:11:29 crc kubenswrapper[4786]: I0127 13:11:29.860867 4786 scope.go:117] "RemoveContainer" containerID="28cd0bc413e45aba559ccd7af7eaabf11a5721cdd06d691d9dc35ba81a4a59fc" Jan 27 13:11:29 crc kubenswrapper[4786]: E0127 13:11:29.861203 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28cd0bc413e45aba559ccd7af7eaabf11a5721cdd06d691d9dc35ba81a4a59fc\": container with ID starting with 28cd0bc413e45aba559ccd7af7eaabf11a5721cdd06d691d9dc35ba81a4a59fc not found: ID does not exist" containerID="28cd0bc413e45aba559ccd7af7eaabf11a5721cdd06d691d9dc35ba81a4a59fc" Jan 27 13:11:29 crc kubenswrapper[4786]: I0127 13:11:29.861240 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28cd0bc413e45aba559ccd7af7eaabf11a5721cdd06d691d9dc35ba81a4a59fc"} err="failed to get container status \"28cd0bc413e45aba559ccd7af7eaabf11a5721cdd06d691d9dc35ba81a4a59fc\": rpc error: code = NotFound desc = could not find container \"28cd0bc413e45aba559ccd7af7eaabf11a5721cdd06d691d9dc35ba81a4a59fc\": container with ID starting with 28cd0bc413e45aba559ccd7af7eaabf11a5721cdd06d691d9dc35ba81a4a59fc not found: ID does not exist" Jan 27 13:11:29 crc kubenswrapper[4786]: I0127 13:11:29.901390 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 13:11:29 crc kubenswrapper[4786]: I0127 13:11:29.901450 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") 
" Jan 27 13:11:29 crc kubenswrapper[4786]: I0127 13:11:29.901478 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 13:11:29 crc kubenswrapper[4786]: I0127 13:11:29.901497 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 13:11:29 crc kubenswrapper[4786]: I0127 13:11:29.901514 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 13:11:29 crc kubenswrapper[4786]: I0127 13:11:29.901676 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:11:29 crc kubenswrapper[4786]: I0127 13:11:29.901701 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:11:29 crc kubenswrapper[4786]: I0127 13:11:29.901706 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:11:29 crc kubenswrapper[4786]: I0127 13:11:29.901733 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:11:29 crc kubenswrapper[4786]: I0127 13:11:29.909121 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:11:29 crc kubenswrapper[4786]: I0127 13:11:29.972641 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 27 13:11:29 crc kubenswrapper[4786]: I0127 13:11:29.987778 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 27 13:11:29 crc kubenswrapper[4786]: I0127 13:11:29.989142 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 27 13:11:30 crc kubenswrapper[4786]: I0127 13:11:30.002891 4786 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 27 13:11:30 crc kubenswrapper[4786]: I0127 13:11:30.002917 4786 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 27 13:11:30 crc kubenswrapper[4786]: I0127 13:11:30.002929 4786 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 13:11:30 crc kubenswrapper[4786]: I0127 13:11:30.002940 4786 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 13:11:30 crc kubenswrapper[4786]: I0127 13:11:30.002952 4786 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 13:11:30 crc kubenswrapper[4786]: I0127 13:11:30.026327 4786 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-admission-controller-secret" Jan 27 13:11:30 crc kubenswrapper[4786]: I0127 13:11:30.116404 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 13:11:30 crc kubenswrapper[4786]: I0127 13:11:30.142001 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 27 13:11:30 crc kubenswrapper[4786]: I0127 13:11:30.149101 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 13:11:30 crc kubenswrapper[4786]: I0127 13:11:30.179043 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 13:11:30 crc kubenswrapper[4786]: I0127 13:11:30.197643 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 27 13:11:30 crc kubenswrapper[4786]: I0127 13:11:30.203352 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 27 13:11:30 crc kubenswrapper[4786]: I0127 13:11:30.208697 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 27 13:11:30 crc kubenswrapper[4786]: I0127 13:11:30.300160 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 13:11:30 crc kubenswrapper[4786]: I0127 13:11:30.352321 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 27 13:11:30 crc kubenswrapper[4786]: I0127 13:11:30.461030 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 27 13:11:30 crc kubenswrapper[4786]: I0127 
13:11:30.466524 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 13:11:30 crc kubenswrapper[4786]: I0127 13:11:30.488301 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 13:11:30 crc kubenswrapper[4786]: I0127 13:11:30.640118 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 27 13:11:30 crc kubenswrapper[4786]: I0127 13:11:30.704556 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 27 13:11:30 crc kubenswrapper[4786]: I0127 13:11:30.726798 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 13:11:30 crc kubenswrapper[4786]: I0127 13:11:30.727543 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 27 13:11:30 crc kubenswrapper[4786]: I0127 13:11:30.746841 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 13:11:30 crc kubenswrapper[4786]: I0127 13:11:30.786542 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 13:11:30 crc kubenswrapper[4786]: I0127 13:11:30.793756 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 27 13:11:30 crc kubenswrapper[4786]: I0127 13:11:30.797002 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 27 13:11:30 crc kubenswrapper[4786]: I0127 13:11:30.831322 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 27 13:11:30 crc 
kubenswrapper[4786]: I0127 13:11:30.852794 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 27 13:11:30 crc kubenswrapper[4786]: I0127 13:11:30.870407 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 27 13:11:31 crc kubenswrapper[4786]: I0127 13:11:31.001340 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 27 13:11:31 crc kubenswrapper[4786]: I0127 13:11:31.024204 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 13:11:31 crc kubenswrapper[4786]: I0127 13:11:31.112229 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 27 13:11:31 crc kubenswrapper[4786]: I0127 13:11:31.169383 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 27 13:11:31 crc kubenswrapper[4786]: I0127 13:11:31.192381 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 27 13:11:31 crc kubenswrapper[4786]: I0127 13:11:31.199873 4786 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 27 13:11:31 crc kubenswrapper[4786]: I0127 13:11:31.216849 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 13:11:31 crc kubenswrapper[4786]: I0127 13:11:31.222179 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 27 13:11:31 crc kubenswrapper[4786]: I0127 13:11:31.416329 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 
27 13:11:31 crc kubenswrapper[4786]: I0127 13:11:31.461051 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 27 13:11:31 crc kubenswrapper[4786]: I0127 13:11:31.477034 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 27 13:11:31 crc kubenswrapper[4786]: I0127 13:11:31.491804 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 13:11:31 crc kubenswrapper[4786]: I0127 13:11:31.501112 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 27 13:11:31 crc kubenswrapper[4786]: I0127 13:11:31.646230 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 13:11:31 crc kubenswrapper[4786]: I0127 13:11:31.650246 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 27 13:11:31 crc kubenswrapper[4786]: I0127 13:11:31.687240 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 27 13:11:31 crc kubenswrapper[4786]: I0127 13:11:31.715347 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 27 13:11:31 crc kubenswrapper[4786]: I0127 13:11:31.788456 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 27 13:11:31 crc kubenswrapper[4786]: I0127 13:11:31.825560 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 13:11:31 crc kubenswrapper[4786]: I0127 13:11:31.828216 4786 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 27 13:11:31 crc kubenswrapper[4786]: I0127 13:11:31.888394 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 13:11:31 crc kubenswrapper[4786]: I0127 13:11:31.933768 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 13:11:31 crc kubenswrapper[4786]: I0127 13:11:31.965392 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 13:11:31 crc kubenswrapper[4786]: I0127 13:11:31.977440 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 27 13:11:32 crc kubenswrapper[4786]: I0127 13:11:32.062034 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 27 13:11:32 crc kubenswrapper[4786]: I0127 13:11:32.063794 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 27 13:11:32 crc kubenswrapper[4786]: I0127 13:11:32.166864 4786 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 27 13:11:32 crc kubenswrapper[4786]: I0127 13:11:32.166924 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: 
connect: connection refused" Jan 27 13:11:32 crc kubenswrapper[4786]: I0127 13:11:32.166987 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 13:11:32 crc kubenswrapper[4786]: I0127 13:11:32.167728 4786 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"822fa3d5243a9bfcf58e6201e07aed0dddcd71d19cb2981b8c50c16f46a7292d"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Jan 27 13:11:32 crc kubenswrapper[4786]: I0127 13:11:32.167827 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://822fa3d5243a9bfcf58e6201e07aed0dddcd71d19cb2981b8c50c16f46a7292d" gracePeriod=30 Jan 27 13:11:32 crc kubenswrapper[4786]: I0127 13:11:32.282001 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 27 13:11:32 crc kubenswrapper[4786]: I0127 13:11:32.325764 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 27 13:11:32 crc kubenswrapper[4786]: I0127 13:11:32.358666 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 13:11:32 crc kubenswrapper[4786]: I0127 13:11:32.359527 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 27 13:11:32 crc kubenswrapper[4786]: I0127 13:11:32.387801 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 27 13:11:32 
crc kubenswrapper[4786]: I0127 13:11:32.456915 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 27 13:11:32 crc kubenswrapper[4786]: I0127 13:11:32.589755 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 27 13:11:32 crc kubenswrapper[4786]: I0127 13:11:32.591847 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 27 13:11:32 crc kubenswrapper[4786]: I0127 13:11:32.707338 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 27 13:11:32 crc kubenswrapper[4786]: I0127 13:11:32.753134 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 27 13:11:32 crc kubenswrapper[4786]: I0127 13:11:32.858930 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 27 13:11:32 crc kubenswrapper[4786]: I0127 13:11:32.928412 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 27 13:11:32 crc kubenswrapper[4786]: I0127 13:11:32.942779 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 27 13:11:32 crc kubenswrapper[4786]: I0127 13:11:32.991880 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 27 13:11:33 crc kubenswrapper[4786]: I0127 13:11:33.102359 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 27 13:11:33 crc kubenswrapper[4786]: I0127 13:11:33.208033 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 27 13:11:33 crc kubenswrapper[4786]: I0127 13:11:33.254623 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 27 13:11:33 crc kubenswrapper[4786]: I0127 13:11:33.279565 4786 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 27 13:11:33 crc kubenswrapper[4786]: I0127 13:11:33.358293 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 27 13:11:33 crc kubenswrapper[4786]: I0127 13:11:33.377791 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 27 13:11:33 crc kubenswrapper[4786]: I0127 13:11:33.401490 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 27 13:11:33 crc kubenswrapper[4786]: I0127 13:11:33.418414 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 27 13:11:33 crc kubenswrapper[4786]: I0127 13:11:33.521189 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 27 13:11:33 crc kubenswrapper[4786]: I0127 13:11:33.546306 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 27 13:11:33 crc kubenswrapper[4786]: I0127 13:11:33.557596 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 27 13:11:33 crc kubenswrapper[4786]: I0127 13:11:33.621501 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 27 13:11:33 crc kubenswrapper[4786]: I0127 13:11:33.726677 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 27 13:11:33 crc kubenswrapper[4786]: I0127 13:11:33.736487 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 27 13:11:33 crc kubenswrapper[4786]: I0127 13:11:33.758415 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 27 13:11:33 crc kubenswrapper[4786]: I0127 13:11:33.766037 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 27 13:11:33 crc kubenswrapper[4786]: I0127 13:11:33.853298 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 27 13:11:33 crc kubenswrapper[4786]: I0127 13:11:33.909386 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 27 13:11:33 crc kubenswrapper[4786]: I0127 13:11:33.934011 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 27 13:11:33 crc kubenswrapper[4786]: I0127 13:11:33.954837 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 27 13:11:33 crc kubenswrapper[4786]: I0127 13:11:33.954909 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 27 13:11:34 crc kubenswrapper[4786]: I0127 13:11:34.042822 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 27 13:11:34 crc kubenswrapper[4786]: I0127 13:11:34.192518 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 27 13:11:34 crc kubenswrapper[4786]: I0127 13:11:34.238779 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 27 13:11:34 crc kubenswrapper[4786]: I0127 13:11:34.277132 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 27 13:11:34 crc kubenswrapper[4786]: I0127 13:11:34.428669 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 27 13:11:34 crc kubenswrapper[4786]: I0127 13:11:34.493348 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 27 13:11:34 crc kubenswrapper[4786]: I0127 13:11:34.501274 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 27 13:11:34 crc kubenswrapper[4786]: I0127 13:11:34.636312 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 27 13:11:34 crc kubenswrapper[4786]: I0127 13:11:34.677582 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 27 13:11:34 crc kubenswrapper[4786]: I0127 13:11:34.922407 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 27 13:11:34 crc kubenswrapper[4786]: I0127 13:11:34.953199 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 27 13:11:34 crc kubenswrapper[4786]: I0127 13:11:34.960447 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 27 13:11:35 crc kubenswrapper[4786]: I0127 13:11:35.005463 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 27 13:11:35 crc kubenswrapper[4786]: I0127 13:11:35.057340 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 27 13:11:35 crc kubenswrapper[4786]: I0127 13:11:35.109679 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 27 13:11:35 crc kubenswrapper[4786]: I0127 13:11:35.136154 4786 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 27 13:11:35 crc kubenswrapper[4786]: I0127 13:11:35.217765 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 27 13:11:35 crc kubenswrapper[4786]: I0127 13:11:35.271578 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 27 13:11:35 crc kubenswrapper[4786]: I0127 13:11:35.293674 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 27 13:11:35 crc kubenswrapper[4786]: I0127 13:11:35.342037 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 27 13:11:35 crc kubenswrapper[4786]: I0127 13:11:35.442645 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 27 13:11:35 crc kubenswrapper[4786]: I0127 13:11:35.659491 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 27 13:11:35 crc kubenswrapper[4786]: I0127 13:11:35.686256 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 27 13:11:35 crc kubenswrapper[4786]: I0127 13:11:35.937842 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 27 13:11:35 crc kubenswrapper[4786]: I0127 13:11:35.987399 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 27 13:11:35 crc kubenswrapper[4786]: I0127 13:11:35.995004 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 27 13:11:36 crc kubenswrapper[4786]: I0127 13:11:36.001255 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 27 13:11:36 crc kubenswrapper[4786]: I0127 13:11:36.130846 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 27 13:11:36 crc kubenswrapper[4786]: I0127 13:11:36.203029 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 27 13:11:36 crc kubenswrapper[4786]: I0127 13:11:36.233895 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 27 13:11:36 crc kubenswrapper[4786]: I0127 13:11:36.303070 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 27 13:11:36 crc kubenswrapper[4786]: I0127 13:11:36.358936 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 27 13:11:36 crc kubenswrapper[4786]: I0127 13:11:36.372475 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 27 13:11:36 crc kubenswrapper[4786]: I0127 13:11:36.399996 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 27 13:11:36 crc kubenswrapper[4786]: I0127 13:11:36.433423 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 27 13:11:36 crc kubenswrapper[4786]: I0127 13:11:36.441380 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 27 13:11:36 crc kubenswrapper[4786]: I0127 13:11:36.448153 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 27 13:11:36 crc kubenswrapper[4786]: I0127 13:11:36.481328 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 27 13:11:36 crc kubenswrapper[4786]: I0127 13:11:36.486797 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 27 13:11:36 crc kubenswrapper[4786]: I0127 13:11:36.615335 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 27 13:11:36 crc kubenswrapper[4786]: I0127 13:11:36.743734 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 27 13:11:36 crc kubenswrapper[4786]: I0127 13:11:36.769107 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 27 13:11:36 crc kubenswrapper[4786]: I0127 13:11:36.794996 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 27 13:11:36 crc kubenswrapper[4786]: I0127 13:11:36.945475 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 27 13:11:36 crc kubenswrapper[4786]: I0127 13:11:36.947130 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 27 13:11:37 crc kubenswrapper[4786]: I0127 13:11:37.181368 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 27 13:11:37 crc kubenswrapper[4786]: I0127 13:11:37.246718 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 27 13:11:37 crc kubenswrapper[4786]: I0127 13:11:37.405591 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 27 13:11:37 crc kubenswrapper[4786]: I0127 13:11:37.791123 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 27 13:11:37 crc kubenswrapper[4786]: I0127 13:11:37.809483 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 27 13:11:37 crc kubenswrapper[4786]: I0127 13:11:37.863534 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 27 13:11:37 crc kubenswrapper[4786]: I0127 13:11:37.870232 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 27 13:11:37 crc kubenswrapper[4786]: I0127 13:11:37.912367 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 27 13:11:38 crc kubenswrapper[4786]: I0127 13:11:38.084207 4786 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 27 13:11:38 crc kubenswrapper[4786]: I0127 13:11:38.473222 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 27 13:11:38 crc kubenswrapper[4786]: I0127 13:11:38.535782 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 27 13:11:38 crc kubenswrapper[4786]: I0127 13:11:38.552131 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 27 13:11:38 crc kubenswrapper[4786]: I0127 13:11:38.640414 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 27 13:11:39 crc kubenswrapper[4786]: I0127 13:11:39.064963 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 27 13:11:39 crc kubenswrapper[4786]: I0127 13:11:39.290794 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 27 13:11:39 crc kubenswrapper[4786]: I0127 13:11:39.667977 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 27 13:11:40 crc kubenswrapper[4786]: I0127 13:11:40.192829 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.485480 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-56f4fc5f47-wll2z"]
Jan 27 13:11:44 crc kubenswrapper[4786]: E0127 13:11:44.485842 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6716fde7-7c58-4306-99f9-67baca4a9238" containerName="installer"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.485866 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="6716fde7-7c58-4306-99f9-67baca4a9238" containerName="installer"
Jan 27 13:11:44 crc kubenswrapper[4786]: E0127 13:11:44.485889 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8156e329-ca23-4079-8b23-ba0c32cc89a9" containerName="oauth-openshift"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.485901 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="8156e329-ca23-4079-8b23-ba0c32cc89a9" containerName="oauth-openshift"
Jan 27 13:11:44 crc kubenswrapper[4786]: E0127 13:11:44.485914 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.485926 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.486087 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="8156e329-ca23-4079-8b23-ba0c32cc89a9" containerName="oauth-openshift"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.486118 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.486136 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="6716fde7-7c58-4306-99f9-67baca4a9238" containerName="installer"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.486714 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.488525 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.489271 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.489538 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.490923 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.491141 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.491324 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.492638 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.492643 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.492992 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.493154 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.493483 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.498813 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.502509 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.503036 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-56f4fc5f47-wll2z"]
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.506671 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.512535 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.590256 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a1686ba2-05c9-4145-9905-3c7e10b29294-v4-0-config-system-serving-cert\") pod \"oauth-openshift-56f4fc5f47-wll2z\" (UID: \"a1686ba2-05c9-4145-9905-3c7e10b29294\") " pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.590310 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a1686ba2-05c9-4145-9905-3c7e10b29294-v4-0-config-user-template-login\") pod \"oauth-openshift-56f4fc5f47-wll2z\" (UID: \"a1686ba2-05c9-4145-9905-3c7e10b29294\") " pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.590345 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a1686ba2-05c9-4145-9905-3c7e10b29294-audit-dir\") pod \"oauth-openshift-56f4fc5f47-wll2z\" (UID: \"a1686ba2-05c9-4145-9905-3c7e10b29294\") " pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.590368 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a1686ba2-05c9-4145-9905-3c7e10b29294-v4-0-config-system-session\") pod \"oauth-openshift-56f4fc5f47-wll2z\" (UID: \"a1686ba2-05c9-4145-9905-3c7e10b29294\") " pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.590389 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a1686ba2-05c9-4145-9905-3c7e10b29294-v4-0-config-system-router-certs\") pod \"oauth-openshift-56f4fc5f47-wll2z\" (UID: \"a1686ba2-05c9-4145-9905-3c7e10b29294\") " pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.590487 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a1686ba2-05c9-4145-9905-3c7e10b29294-audit-policies\") pod \"oauth-openshift-56f4fc5f47-wll2z\" (UID: \"a1686ba2-05c9-4145-9905-3c7e10b29294\") " pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.590563 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a1686ba2-05c9-4145-9905-3c7e10b29294-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-56f4fc5f47-wll2z\" (UID: \"a1686ba2-05c9-4145-9905-3c7e10b29294\") " pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.590668 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a1686ba2-05c9-4145-9905-3c7e10b29294-v4-0-config-system-service-ca\") pod \"oauth-openshift-56f4fc5f47-wll2z\" (UID: \"a1686ba2-05c9-4145-9905-3c7e10b29294\") " pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.590695 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a1686ba2-05c9-4145-9905-3c7e10b29294-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-56f4fc5f47-wll2z\" (UID: \"a1686ba2-05c9-4145-9905-3c7e10b29294\") " pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.590744 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1686ba2-05c9-4145-9905-3c7e10b29294-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-56f4fc5f47-wll2z\" (UID: \"a1686ba2-05c9-4145-9905-3c7e10b29294\") " pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.590770 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a1686ba2-05c9-4145-9905-3c7e10b29294-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-56f4fc5f47-wll2z\" (UID: \"a1686ba2-05c9-4145-9905-3c7e10b29294\") " pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.590795 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a1686ba2-05c9-4145-9905-3c7e10b29294-v4-0-config-system-cliconfig\") pod \"oauth-openshift-56f4fc5f47-wll2z\" (UID: \"a1686ba2-05c9-4145-9905-3c7e10b29294\") " pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.590819 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzrvv\" (UniqueName: \"kubernetes.io/projected/a1686ba2-05c9-4145-9905-3c7e10b29294-kube-api-access-hzrvv\") pod \"oauth-openshift-56f4fc5f47-wll2z\" (UID: \"a1686ba2-05c9-4145-9905-3c7e10b29294\") " pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.590935 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a1686ba2-05c9-4145-9905-3c7e10b29294-v4-0-config-user-template-error\") pod \"oauth-openshift-56f4fc5f47-wll2z\" (UID: \"a1686ba2-05c9-4145-9905-3c7e10b29294\") " pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.691568 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a1686ba2-05c9-4145-9905-3c7e10b29294-v4-0-config-system-serving-cert\") pod \"oauth-openshift-56f4fc5f47-wll2z\" (UID: \"a1686ba2-05c9-4145-9905-3c7e10b29294\") " pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.691631 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a1686ba2-05c9-4145-9905-3c7e10b29294-v4-0-config-user-template-login\") pod \"oauth-openshift-56f4fc5f47-wll2z\" (UID: \"a1686ba2-05c9-4145-9905-3c7e10b29294\") " pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.691666 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a1686ba2-05c9-4145-9905-3c7e10b29294-audit-dir\") pod \"oauth-openshift-56f4fc5f47-wll2z\" (UID: \"a1686ba2-05c9-4145-9905-3c7e10b29294\") " pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.691733 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a1686ba2-05c9-4145-9905-3c7e10b29294-v4-0-config-system-session\") pod \"oauth-openshift-56f4fc5f47-wll2z\" (UID: \"a1686ba2-05c9-4145-9905-3c7e10b29294\") " pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.691804 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a1686ba2-05c9-4145-9905-3c7e10b29294-audit-dir\") pod \"oauth-openshift-56f4fc5f47-wll2z\" (UID: \"a1686ba2-05c9-4145-9905-3c7e10b29294\") " pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.691857 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a1686ba2-05c9-4145-9905-3c7e10b29294-v4-0-config-system-router-certs\") pod \"oauth-openshift-56f4fc5f47-wll2z\" (UID: \"a1686ba2-05c9-4145-9905-3c7e10b29294\") " pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.691886 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a1686ba2-05c9-4145-9905-3c7e10b29294-audit-policies\") pod \"oauth-openshift-56f4fc5f47-wll2z\" (UID: \"a1686ba2-05c9-4145-9905-3c7e10b29294\") " pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.691908 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a1686ba2-05c9-4145-9905-3c7e10b29294-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-56f4fc5f47-wll2z\" (UID: \"a1686ba2-05c9-4145-9905-3c7e10b29294\") " pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.691932 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a1686ba2-05c9-4145-9905-3c7e10b29294-v4-0-config-system-service-ca\") pod \"oauth-openshift-56f4fc5f47-wll2z\" (UID: \"a1686ba2-05c9-4145-9905-3c7e10b29294\") " pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.692760 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a1686ba2-05c9-4145-9905-3c7e10b29294-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-56f4fc5f47-wll2z\" (UID: \"a1686ba2-05c9-4145-9905-3c7e10b29294\") " pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.692785 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a1686ba2-05c9-4145-9905-3c7e10b29294-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-56f4fc5f47-wll2z\" (UID: \"a1686ba2-05c9-4145-9905-3c7e10b29294\") " pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.692802 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1686ba2-05c9-4145-9905-3c7e10b29294-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-56f4fc5f47-wll2z\" (UID: \"a1686ba2-05c9-4145-9905-3c7e10b29294\") " pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.692818 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a1686ba2-05c9-4145-9905-3c7e10b29294-v4-0-config-system-cliconfig\") pod \"oauth-openshift-56f4fc5f47-wll2z\" (UID: \"a1686ba2-05c9-4145-9905-3c7e10b29294\") " pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.692813 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a1686ba2-05c9-4145-9905-3c7e10b29294-audit-policies\") pod \"oauth-openshift-56f4fc5f47-wll2z\" (UID: \"a1686ba2-05c9-4145-9905-3c7e10b29294\") " pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.692831 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzrvv\" (UniqueName: \"kubernetes.io/projected/a1686ba2-05c9-4145-9905-3c7e10b29294-kube-api-access-hzrvv\") pod \"oauth-openshift-56f4fc5f47-wll2z\" (UID: \"a1686ba2-05c9-4145-9905-3c7e10b29294\") " pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.692968 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a1686ba2-05c9-4145-9905-3c7e10b29294-v4-0-config-user-template-error\") pod \"oauth-openshift-56f4fc5f47-wll2z\" (UID: \"a1686ba2-05c9-4145-9905-3c7e10b29294\") " pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.693683 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a1686ba2-05c9-4145-9905-3c7e10b29294-v4-0-config-system-service-ca\") pod \"oauth-openshift-56f4fc5f47-wll2z\" (UID: \"a1686ba2-05c9-4145-9905-3c7e10b29294\") " pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.694461 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a1686ba2-05c9-4145-9905-3c7e10b29294-v4-0-config-system-cliconfig\") pod \"oauth-openshift-56f4fc5f47-wll2z\" (UID: \"a1686ba2-05c9-4145-9905-3c7e10b29294\") " pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.696762 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1686ba2-05c9-4145-9905-3c7e10b29294-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-56f4fc5f47-wll2z\" (UID: \"a1686ba2-05c9-4145-9905-3c7e10b29294\") " pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.697945 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a1686ba2-05c9-4145-9905-3c7e10b29294-v4-0-config-user-template-error\") pod \"oauth-openshift-56f4fc5f47-wll2z\" (UID: \"a1686ba2-05c9-4145-9905-3c7e10b29294\") " pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.698246 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a1686ba2-05c9-4145-9905-3c7e10b29294-v4-0-config-system-session\") pod \"oauth-openshift-56f4fc5f47-wll2z\" (UID: \"a1686ba2-05c9-4145-9905-3c7e10b29294\") " pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.698413 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a1686ba2-05c9-4145-9905-3c7e10b29294-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-56f4fc5f47-wll2z\" (UID: \"a1686ba2-05c9-4145-9905-3c7e10b29294\") " pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.698444 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a1686ba2-05c9-4145-9905-3c7e10b29294-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-56f4fc5f47-wll2z\" (UID: \"a1686ba2-05c9-4145-9905-3c7e10b29294\") " pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z"
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.698581 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/a1686ba2-05c9-4145-9905-3c7e10b29294-v4-0-config-system-router-certs\") pod \"oauth-openshift-56f4fc5f47-wll2z\" (UID: \"a1686ba2-05c9-4145-9905-3c7e10b29294\") " pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z" Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.699027 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a1686ba2-05c9-4145-9905-3c7e10b29294-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-56f4fc5f47-wll2z\" (UID: \"a1686ba2-05c9-4145-9905-3c7e10b29294\") " pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z" Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.701711 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a1686ba2-05c9-4145-9905-3c7e10b29294-v4-0-config-user-template-login\") pod \"oauth-openshift-56f4fc5f47-wll2z\" (UID: \"a1686ba2-05c9-4145-9905-3c7e10b29294\") " pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z" Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.707462 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a1686ba2-05c9-4145-9905-3c7e10b29294-v4-0-config-system-serving-cert\") pod \"oauth-openshift-56f4fc5f47-wll2z\" (UID: \"a1686ba2-05c9-4145-9905-3c7e10b29294\") " pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z" Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.710134 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzrvv\" (UniqueName: \"kubernetes.io/projected/a1686ba2-05c9-4145-9905-3c7e10b29294-kube-api-access-hzrvv\") pod \"oauth-openshift-56f4fc5f47-wll2z\" (UID: \"a1686ba2-05c9-4145-9905-3c7e10b29294\") " pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z" 
Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.810396 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z" Jan 27 13:11:44 crc kubenswrapper[4786]: I0127 13:11:44.973186 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-56f4fc5f47-wll2z"] Jan 27 13:11:45 crc kubenswrapper[4786]: I0127 13:11:45.937632 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z" event={"ID":"a1686ba2-05c9-4145-9905-3c7e10b29294","Type":"ContainerStarted","Data":"5a637157eee25c5d5aa180edb254913bbf01b6efd71787cd040a1eb6e817cb75"} Jan 27 13:11:45 crc kubenswrapper[4786]: I0127 13:11:45.937683 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z" event={"ID":"a1686ba2-05c9-4145-9905-3c7e10b29294","Type":"ContainerStarted","Data":"d3f081502d19d304ada96ab9113a017249411cd213a38f13adfa0347e89ec76b"} Jan 27 13:11:45 crc kubenswrapper[4786]: I0127 13:11:45.957658 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z" podStartSLOduration=65.957640372 podStartE2EDuration="1m5.957640372s" podCreationTimestamp="2026-01-27 13:10:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:11:45.955359332 +0000 UTC m=+289.165973471" watchObservedRunningTime="2026-01-27 13:11:45.957640372 +0000 UTC m=+289.168254501" Jan 27 13:11:46 crc kubenswrapper[4786]: I0127 13:11:46.942478 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z" Jan 27 13:11:46 crc kubenswrapper[4786]: I0127 13:11:46.947569 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-authentication/oauth-openshift-56f4fc5f47-wll2z" Jan 27 13:11:54 crc kubenswrapper[4786]: I0127 13:11:54.994793 4786 generic.go:334] "Generic (PLEG): container finished" podID="294ae02f-293d-4db8-9a9f-6d3878c8ccf9" containerID="134f9aafba7557b0fffd00fb582a5deae1327822d1ae4b6a2899a49c060abdd3" exitCode=0 Jan 27 13:11:54 crc kubenswrapper[4786]: I0127 13:11:54.994933 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ndd4n" event={"ID":"294ae02f-293d-4db8-9a9f-6d3878c8ccf9","Type":"ContainerDied","Data":"134f9aafba7557b0fffd00fb582a5deae1327822d1ae4b6a2899a49c060abdd3"} Jan 27 13:11:54 crc kubenswrapper[4786]: I0127 13:11:54.995721 4786 scope.go:117] "RemoveContainer" containerID="134f9aafba7557b0fffd00fb582a5deae1327822d1ae4b6a2899a49c060abdd3" Jan 27 13:11:56 crc kubenswrapper[4786]: I0127 13:11:56.002593 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ndd4n" event={"ID":"294ae02f-293d-4db8-9a9f-6d3878c8ccf9","Type":"ContainerStarted","Data":"ab14ac4bd6bddefd8577218400d2fc3c74e23314875c9c8a5b0b4b9a1d9120cd"} Jan 27 13:11:56 crc kubenswrapper[4786]: I0127 13:11:56.002955 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-ndd4n" Jan 27 13:11:56 crc kubenswrapper[4786]: I0127 13:11:56.005228 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-ndd4n" Jan 27 13:11:57 crc kubenswrapper[4786]: I0127 13:11:57.252829 4786 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 27 13:11:58 crc kubenswrapper[4786]: I0127 13:11:58.355037 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wtdrs"] Jan 27 13:11:58 crc kubenswrapper[4786]: I0127 
13:11:58.355231 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-wtdrs" podUID="63594f44-fa91-43fe-b1da-d1df4f593e45" containerName="controller-manager" containerID="cri-o://83da580caeaf7d9b27deb658fabc0d91fde7f6645c1c3939e643056b0d3456cd" gracePeriod=30 Jan 27 13:11:58 crc kubenswrapper[4786]: I0127 13:11:58.382705 4786 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-wtdrs container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 27 13:11:58 crc kubenswrapper[4786]: I0127 13:11:58.383285 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-wtdrs" podUID="63594f44-fa91-43fe-b1da-d1df4f593e45" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 27 13:11:58 crc kubenswrapper[4786]: I0127 13:11:58.468441 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-72vpf"] Jan 27 13:11:58 crc kubenswrapper[4786]: I0127 13:11:58.469758 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-72vpf" podUID="975cb02b-51f0-4d7b-a59c-b25126c0c1c2" containerName="route-controller-manager" containerID="cri-o://950c66bdc38069594b8a14e32366ea161d35fb5dfb62df1cba10cdbfec6c6a1d" gracePeriod=30 Jan 27 13:11:58 crc kubenswrapper[4786]: I0127 13:11:58.720221 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wtdrs" Jan 27 13:11:58 crc kubenswrapper[4786]: I0127 13:11:58.785419 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-72vpf" Jan 27 13:11:58 crc kubenswrapper[4786]: I0127 13:11:58.858073 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63594f44-fa91-43fe-b1da-d1df4f593e45-config\") pod \"63594f44-fa91-43fe-b1da-d1df4f593e45\" (UID: \"63594f44-fa91-43fe-b1da-d1df4f593e45\") " Jan 27 13:11:58 crc kubenswrapper[4786]: I0127 13:11:58.858139 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/975cb02b-51f0-4d7b-a59c-b25126c0c1c2-config\") pod \"975cb02b-51f0-4d7b-a59c-b25126c0c1c2\" (UID: \"975cb02b-51f0-4d7b-a59c-b25126c0c1c2\") " Jan 27 13:11:58 crc kubenswrapper[4786]: I0127 13:11:58.858171 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/975cb02b-51f0-4d7b-a59c-b25126c0c1c2-client-ca\") pod \"975cb02b-51f0-4d7b-a59c-b25126c0c1c2\" (UID: \"975cb02b-51f0-4d7b-a59c-b25126c0c1c2\") " Jan 27 13:11:58 crc kubenswrapper[4786]: I0127 13:11:58.858212 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5jpn\" (UniqueName: \"kubernetes.io/projected/975cb02b-51f0-4d7b-a59c-b25126c0c1c2-kube-api-access-l5jpn\") pod \"975cb02b-51f0-4d7b-a59c-b25126c0c1c2\" (UID: \"975cb02b-51f0-4d7b-a59c-b25126c0c1c2\") " Jan 27 13:11:58 crc kubenswrapper[4786]: I0127 13:11:58.858254 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/975cb02b-51f0-4d7b-a59c-b25126c0c1c2-serving-cert\") pod 
\"975cb02b-51f0-4d7b-a59c-b25126c0c1c2\" (UID: \"975cb02b-51f0-4d7b-a59c-b25126c0c1c2\") " Jan 27 13:11:58 crc kubenswrapper[4786]: I0127 13:11:58.858278 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/63594f44-fa91-43fe-b1da-d1df4f593e45-proxy-ca-bundles\") pod \"63594f44-fa91-43fe-b1da-d1df4f593e45\" (UID: \"63594f44-fa91-43fe-b1da-d1df4f593e45\") " Jan 27 13:11:58 crc kubenswrapper[4786]: I0127 13:11:58.858330 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65s5z\" (UniqueName: \"kubernetes.io/projected/63594f44-fa91-43fe-b1da-d1df4f593e45-kube-api-access-65s5z\") pod \"63594f44-fa91-43fe-b1da-d1df4f593e45\" (UID: \"63594f44-fa91-43fe-b1da-d1df4f593e45\") " Jan 27 13:11:58 crc kubenswrapper[4786]: I0127 13:11:58.858372 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63594f44-fa91-43fe-b1da-d1df4f593e45-client-ca\") pod \"63594f44-fa91-43fe-b1da-d1df4f593e45\" (UID: \"63594f44-fa91-43fe-b1da-d1df4f593e45\") " Jan 27 13:11:58 crc kubenswrapper[4786]: I0127 13:11:58.858394 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63594f44-fa91-43fe-b1da-d1df4f593e45-serving-cert\") pod \"63594f44-fa91-43fe-b1da-d1df4f593e45\" (UID: \"63594f44-fa91-43fe-b1da-d1df4f593e45\") " Jan 27 13:11:58 crc kubenswrapper[4786]: I0127 13:11:58.859117 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63594f44-fa91-43fe-b1da-d1df4f593e45-config" (OuterVolumeSpecName: "config") pod "63594f44-fa91-43fe-b1da-d1df4f593e45" (UID: "63594f44-fa91-43fe-b1da-d1df4f593e45"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:11:58 crc kubenswrapper[4786]: I0127 13:11:58.859280 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/975cb02b-51f0-4d7b-a59c-b25126c0c1c2-config" (OuterVolumeSpecName: "config") pod "975cb02b-51f0-4d7b-a59c-b25126c0c1c2" (UID: "975cb02b-51f0-4d7b-a59c-b25126c0c1c2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:11:58 crc kubenswrapper[4786]: I0127 13:11:58.859462 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63594f44-fa91-43fe-b1da-d1df4f593e45-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "63594f44-fa91-43fe-b1da-d1df4f593e45" (UID: "63594f44-fa91-43fe-b1da-d1df4f593e45"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:11:58 crc kubenswrapper[4786]: I0127 13:11:58.859628 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/975cb02b-51f0-4d7b-a59c-b25126c0c1c2-client-ca" (OuterVolumeSpecName: "client-ca") pod "975cb02b-51f0-4d7b-a59c-b25126c0c1c2" (UID: "975cb02b-51f0-4d7b-a59c-b25126c0c1c2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:11:58 crc kubenswrapper[4786]: I0127 13:11:58.859769 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63594f44-fa91-43fe-b1da-d1df4f593e45-client-ca" (OuterVolumeSpecName: "client-ca") pod "63594f44-fa91-43fe-b1da-d1df4f593e45" (UID: "63594f44-fa91-43fe-b1da-d1df4f593e45"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:11:58 crc kubenswrapper[4786]: I0127 13:11:58.868110 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/975cb02b-51f0-4d7b-a59c-b25126c0c1c2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "975cb02b-51f0-4d7b-a59c-b25126c0c1c2" (UID: "975cb02b-51f0-4d7b-a59c-b25126c0c1c2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:11:58 crc kubenswrapper[4786]: I0127 13:11:58.869310 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/975cb02b-51f0-4d7b-a59c-b25126c0c1c2-kube-api-access-l5jpn" (OuterVolumeSpecName: "kube-api-access-l5jpn") pod "975cb02b-51f0-4d7b-a59c-b25126c0c1c2" (UID: "975cb02b-51f0-4d7b-a59c-b25126c0c1c2"). InnerVolumeSpecName "kube-api-access-l5jpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:11:58 crc kubenswrapper[4786]: I0127 13:11:58.869398 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63594f44-fa91-43fe-b1da-d1df4f593e45-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "63594f44-fa91-43fe-b1da-d1df4f593e45" (UID: "63594f44-fa91-43fe-b1da-d1df4f593e45"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:11:58 crc kubenswrapper[4786]: I0127 13:11:58.877856 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63594f44-fa91-43fe-b1da-d1df4f593e45-kube-api-access-65s5z" (OuterVolumeSpecName: "kube-api-access-65s5z") pod "63594f44-fa91-43fe-b1da-d1df4f593e45" (UID: "63594f44-fa91-43fe-b1da-d1df4f593e45"). InnerVolumeSpecName "kube-api-access-65s5z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:11:58 crc kubenswrapper[4786]: I0127 13:11:58.959745 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/975cb02b-51f0-4d7b-a59c-b25126c0c1c2-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:11:58 crc kubenswrapper[4786]: I0127 13:11:58.959782 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63594f44-fa91-43fe-b1da-d1df4f593e45-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:11:58 crc kubenswrapper[4786]: I0127 13:11:58.959794 4786 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/975cb02b-51f0-4d7b-a59c-b25126c0c1c2-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 13:11:58 crc kubenswrapper[4786]: I0127 13:11:58.959812 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5jpn\" (UniqueName: \"kubernetes.io/projected/975cb02b-51f0-4d7b-a59c-b25126c0c1c2-kube-api-access-l5jpn\") on node \"crc\" DevicePath \"\"" Jan 27 13:11:58 crc kubenswrapper[4786]: I0127 13:11:58.959826 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/975cb02b-51f0-4d7b-a59c-b25126c0c1c2-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:11:58 crc kubenswrapper[4786]: I0127 13:11:58.959838 4786 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/63594f44-fa91-43fe-b1da-d1df4f593e45-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 13:11:58 crc kubenswrapper[4786]: I0127 13:11:58.959846 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65s5z\" (UniqueName: \"kubernetes.io/projected/63594f44-fa91-43fe-b1da-d1df4f593e45-kube-api-access-65s5z\") on node \"crc\" DevicePath \"\"" Jan 27 13:11:58 crc kubenswrapper[4786]: I0127 13:11:58.959853 4786 
reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63594f44-fa91-43fe-b1da-d1df4f593e45-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 13:11:58 crc kubenswrapper[4786]: I0127 13:11:58.959861 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63594f44-fa91-43fe-b1da-d1df4f593e45-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:11:59 crc kubenswrapper[4786]: I0127 13:11:59.017419 4786 generic.go:334] "Generic (PLEG): container finished" podID="63594f44-fa91-43fe-b1da-d1df4f593e45" containerID="83da580caeaf7d9b27deb658fabc0d91fde7f6645c1c3939e643056b0d3456cd" exitCode=0 Jan 27 13:11:59 crc kubenswrapper[4786]: I0127 13:11:59.017469 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wtdrs" event={"ID":"63594f44-fa91-43fe-b1da-d1df4f593e45","Type":"ContainerDied","Data":"83da580caeaf7d9b27deb658fabc0d91fde7f6645c1c3939e643056b0d3456cd"} Jan 27 13:11:59 crc kubenswrapper[4786]: I0127 13:11:59.017520 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wtdrs" event={"ID":"63594f44-fa91-43fe-b1da-d1df4f593e45","Type":"ContainerDied","Data":"a2913eaee8dc7b9f4e066bb9aa6e221b72763a4d164fc4a390943e5289c8ca5a"} Jan 27 13:11:59 crc kubenswrapper[4786]: I0127 13:11:59.017514 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wtdrs" Jan 27 13:11:59 crc kubenswrapper[4786]: I0127 13:11:59.017565 4786 scope.go:117] "RemoveContainer" containerID="83da580caeaf7d9b27deb658fabc0d91fde7f6645c1c3939e643056b0d3456cd" Jan 27 13:11:59 crc kubenswrapper[4786]: I0127 13:11:59.019192 4786 generic.go:334] "Generic (PLEG): container finished" podID="975cb02b-51f0-4d7b-a59c-b25126c0c1c2" containerID="950c66bdc38069594b8a14e32366ea161d35fb5dfb62df1cba10cdbfec6c6a1d" exitCode=0 Jan 27 13:11:59 crc kubenswrapper[4786]: I0127 13:11:59.019234 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-72vpf" Jan 27 13:11:59 crc kubenswrapper[4786]: I0127 13:11:59.019250 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-72vpf" event={"ID":"975cb02b-51f0-4d7b-a59c-b25126c0c1c2","Type":"ContainerDied","Data":"950c66bdc38069594b8a14e32366ea161d35fb5dfb62df1cba10cdbfec6c6a1d"} Jan 27 13:11:59 crc kubenswrapper[4786]: I0127 13:11:59.019282 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-72vpf" event={"ID":"975cb02b-51f0-4d7b-a59c-b25126c0c1c2","Type":"ContainerDied","Data":"ee201c2b3040d2acc017a776a90d3656fd6177971cc33c2323cab9cb6f10a000"} Jan 27 13:11:59 crc kubenswrapper[4786]: I0127 13:11:59.035545 4786 scope.go:117] "RemoveContainer" containerID="83da580caeaf7d9b27deb658fabc0d91fde7f6645c1c3939e643056b0d3456cd" Jan 27 13:11:59 crc kubenswrapper[4786]: E0127 13:11:59.037277 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83da580caeaf7d9b27deb658fabc0d91fde7f6645c1c3939e643056b0d3456cd\": container with ID starting with 83da580caeaf7d9b27deb658fabc0d91fde7f6645c1c3939e643056b0d3456cd not 
found: ID does not exist" containerID="83da580caeaf7d9b27deb658fabc0d91fde7f6645c1c3939e643056b0d3456cd" Jan 27 13:11:59 crc kubenswrapper[4786]: I0127 13:11:59.037309 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83da580caeaf7d9b27deb658fabc0d91fde7f6645c1c3939e643056b0d3456cd"} err="failed to get container status \"83da580caeaf7d9b27deb658fabc0d91fde7f6645c1c3939e643056b0d3456cd\": rpc error: code = NotFound desc = could not find container \"83da580caeaf7d9b27deb658fabc0d91fde7f6645c1c3939e643056b0d3456cd\": container with ID starting with 83da580caeaf7d9b27deb658fabc0d91fde7f6645c1c3939e643056b0d3456cd not found: ID does not exist" Jan 27 13:11:59 crc kubenswrapper[4786]: I0127 13:11:59.037332 4786 scope.go:117] "RemoveContainer" containerID="950c66bdc38069594b8a14e32366ea161d35fb5dfb62df1cba10cdbfec6c6a1d" Jan 27 13:11:59 crc kubenswrapper[4786]: I0127 13:11:59.045368 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wtdrs"] Jan 27 13:11:59 crc kubenswrapper[4786]: I0127 13:11:59.048943 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wtdrs"] Jan 27 13:11:59 crc kubenswrapper[4786]: I0127 13:11:59.054820 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-72vpf"] Jan 27 13:11:59 crc kubenswrapper[4786]: I0127 13:11:59.055562 4786 scope.go:117] "RemoveContainer" containerID="950c66bdc38069594b8a14e32366ea161d35fb5dfb62df1cba10cdbfec6c6a1d" Jan 27 13:11:59 crc kubenswrapper[4786]: E0127 13:11:59.056081 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"950c66bdc38069594b8a14e32366ea161d35fb5dfb62df1cba10cdbfec6c6a1d\": container with ID starting with 950c66bdc38069594b8a14e32366ea161d35fb5dfb62df1cba10cdbfec6c6a1d not 
found: ID does not exist" containerID="950c66bdc38069594b8a14e32366ea161d35fb5dfb62df1cba10cdbfec6c6a1d" Jan 27 13:11:59 crc kubenswrapper[4786]: I0127 13:11:59.056167 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"950c66bdc38069594b8a14e32366ea161d35fb5dfb62df1cba10cdbfec6c6a1d"} err="failed to get container status \"950c66bdc38069594b8a14e32366ea161d35fb5dfb62df1cba10cdbfec6c6a1d\": rpc error: code = NotFound desc = could not find container \"950c66bdc38069594b8a14e32366ea161d35fb5dfb62df1cba10cdbfec6c6a1d\": container with ID starting with 950c66bdc38069594b8a14e32366ea161d35fb5dfb62df1cba10cdbfec6c6a1d not found: ID does not exist" Jan 27 13:11:59 crc kubenswrapper[4786]: I0127 13:11:59.059751 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-72vpf"] Jan 27 13:11:59 crc kubenswrapper[4786]: I0127 13:11:59.472211 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63594f44-fa91-43fe-b1da-d1df4f593e45" path="/var/lib/kubelet/pods/63594f44-fa91-43fe-b1da-d1df4f593e45/volumes" Jan 27 13:11:59 crc kubenswrapper[4786]: I0127 13:11:59.473333 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="975cb02b-51f0-4d7b-a59c-b25126c0c1c2" path="/var/lib/kubelet/pods/975cb02b-51f0-4d7b-a59c-b25126c0c1c2/volumes" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.494312 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-57b9555869-dn8vt"] Jan 27 13:12:00 crc kubenswrapper[4786]: E0127 13:12:00.494575 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63594f44-fa91-43fe-b1da-d1df4f593e45" containerName="controller-manager" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.494589 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="63594f44-fa91-43fe-b1da-d1df4f593e45" containerName="controller-manager" Jan 27 
13:12:00 crc kubenswrapper[4786]: E0127 13:12:00.494598 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="975cb02b-51f0-4d7b-a59c-b25126c0c1c2" containerName="route-controller-manager" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.494618 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="975cb02b-51f0-4d7b-a59c-b25126c0c1c2" containerName="route-controller-manager" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.494734 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="63594f44-fa91-43fe-b1da-d1df4f593e45" containerName="controller-manager" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.494745 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="975cb02b-51f0-4d7b-a59c-b25126c0c1c2" containerName="route-controller-manager" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.495141 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57b9555869-dn8vt" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.497098 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.497216 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.497562 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.498813 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.500081 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 13:12:00 crc 
kubenswrapper[4786]: I0127 13:12:00.501599 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6595c9664f-wp9fd"] Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.502398 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6595c9664f-wp9fd" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.503925 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.505101 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.505308 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.505456 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.505580 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.505738 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.505902 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.508730 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.510196 4786 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6595c9664f-wp9fd"] Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.513323 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57b9555869-dn8vt"] Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.581921 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/753c4346-bc77-4a0b-9644-b2faea3f3229-proxy-ca-bundles\") pod \"controller-manager-57b9555869-dn8vt\" (UID: \"753c4346-bc77-4a0b-9644-b2faea3f3229\") " pod="openshift-controller-manager/controller-manager-57b9555869-dn8vt" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.581988 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/753c4346-bc77-4a0b-9644-b2faea3f3229-config\") pod \"controller-manager-57b9555869-dn8vt\" (UID: \"753c4346-bc77-4a0b-9644-b2faea3f3229\") " pod="openshift-controller-manager/controller-manager-57b9555869-dn8vt" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.582031 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/753c4346-bc77-4a0b-9644-b2faea3f3229-client-ca\") pod \"controller-manager-57b9555869-dn8vt\" (UID: \"753c4346-bc77-4a0b-9644-b2faea3f3229\") " pod="openshift-controller-manager/controller-manager-57b9555869-dn8vt" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.582058 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdftz\" (UniqueName: \"kubernetes.io/projected/22a07492-3b52-4508-ac59-f10c1eb94bd5-kube-api-access-fdftz\") pod \"route-controller-manager-6595c9664f-wp9fd\" (UID: \"22a07492-3b52-4508-ac59-f10c1eb94bd5\") " 
pod="openshift-route-controller-manager/route-controller-manager-6595c9664f-wp9fd" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.582127 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22a07492-3b52-4508-ac59-f10c1eb94bd5-serving-cert\") pod \"route-controller-manager-6595c9664f-wp9fd\" (UID: \"22a07492-3b52-4508-ac59-f10c1eb94bd5\") " pod="openshift-route-controller-manager/route-controller-manager-6595c9664f-wp9fd" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.582144 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/22a07492-3b52-4508-ac59-f10c1eb94bd5-client-ca\") pod \"route-controller-manager-6595c9664f-wp9fd\" (UID: \"22a07492-3b52-4508-ac59-f10c1eb94bd5\") " pod="openshift-route-controller-manager/route-controller-manager-6595c9664f-wp9fd" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.582176 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckbr8\" (UniqueName: \"kubernetes.io/projected/753c4346-bc77-4a0b-9644-b2faea3f3229-kube-api-access-ckbr8\") pod \"controller-manager-57b9555869-dn8vt\" (UID: \"753c4346-bc77-4a0b-9644-b2faea3f3229\") " pod="openshift-controller-manager/controller-manager-57b9555869-dn8vt" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.582197 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22a07492-3b52-4508-ac59-f10c1eb94bd5-config\") pod \"route-controller-manager-6595c9664f-wp9fd\" (UID: \"22a07492-3b52-4508-ac59-f10c1eb94bd5\") " pod="openshift-route-controller-manager/route-controller-manager-6595c9664f-wp9fd" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.582211 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/753c4346-bc77-4a0b-9644-b2faea3f3229-serving-cert\") pod \"controller-manager-57b9555869-dn8vt\" (UID: \"753c4346-bc77-4a0b-9644-b2faea3f3229\") " pod="openshift-controller-manager/controller-manager-57b9555869-dn8vt" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.682860 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/22a07492-3b52-4508-ac59-f10c1eb94bd5-client-ca\") pod \"route-controller-manager-6595c9664f-wp9fd\" (UID: \"22a07492-3b52-4508-ac59-f10c1eb94bd5\") " pod="openshift-route-controller-manager/route-controller-manager-6595c9664f-wp9fd" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.682905 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckbr8\" (UniqueName: \"kubernetes.io/projected/753c4346-bc77-4a0b-9644-b2faea3f3229-kube-api-access-ckbr8\") pod \"controller-manager-57b9555869-dn8vt\" (UID: \"753c4346-bc77-4a0b-9644-b2faea3f3229\") " pod="openshift-controller-manager/controller-manager-57b9555869-dn8vt" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.682930 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22a07492-3b52-4508-ac59-f10c1eb94bd5-config\") pod \"route-controller-manager-6595c9664f-wp9fd\" (UID: \"22a07492-3b52-4508-ac59-f10c1eb94bd5\") " pod="openshift-route-controller-manager/route-controller-manager-6595c9664f-wp9fd" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.682959 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/753c4346-bc77-4a0b-9644-b2faea3f3229-serving-cert\") pod \"controller-manager-57b9555869-dn8vt\" (UID: \"753c4346-bc77-4a0b-9644-b2faea3f3229\") " 
pod="openshift-controller-manager/controller-manager-57b9555869-dn8vt" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.682988 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/753c4346-bc77-4a0b-9644-b2faea3f3229-proxy-ca-bundles\") pod \"controller-manager-57b9555869-dn8vt\" (UID: \"753c4346-bc77-4a0b-9644-b2faea3f3229\") " pod="openshift-controller-manager/controller-manager-57b9555869-dn8vt" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.683006 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/753c4346-bc77-4a0b-9644-b2faea3f3229-config\") pod \"controller-manager-57b9555869-dn8vt\" (UID: \"753c4346-bc77-4a0b-9644-b2faea3f3229\") " pod="openshift-controller-manager/controller-manager-57b9555869-dn8vt" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.683025 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/753c4346-bc77-4a0b-9644-b2faea3f3229-client-ca\") pod \"controller-manager-57b9555869-dn8vt\" (UID: \"753c4346-bc77-4a0b-9644-b2faea3f3229\") " pod="openshift-controller-manager/controller-manager-57b9555869-dn8vt" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.683045 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdftz\" (UniqueName: \"kubernetes.io/projected/22a07492-3b52-4508-ac59-f10c1eb94bd5-kube-api-access-fdftz\") pod \"route-controller-manager-6595c9664f-wp9fd\" (UID: \"22a07492-3b52-4508-ac59-f10c1eb94bd5\") " pod="openshift-route-controller-manager/route-controller-manager-6595c9664f-wp9fd" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.683098 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/22a07492-3b52-4508-ac59-f10c1eb94bd5-serving-cert\") pod \"route-controller-manager-6595c9664f-wp9fd\" (UID: \"22a07492-3b52-4508-ac59-f10c1eb94bd5\") " pod="openshift-route-controller-manager/route-controller-manager-6595c9664f-wp9fd" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.683787 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/22a07492-3b52-4508-ac59-f10c1eb94bd5-client-ca\") pod \"route-controller-manager-6595c9664f-wp9fd\" (UID: \"22a07492-3b52-4508-ac59-f10c1eb94bd5\") " pod="openshift-route-controller-manager/route-controller-manager-6595c9664f-wp9fd" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.684493 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/753c4346-bc77-4a0b-9644-b2faea3f3229-client-ca\") pod \"controller-manager-57b9555869-dn8vt\" (UID: \"753c4346-bc77-4a0b-9644-b2faea3f3229\") " pod="openshift-controller-manager/controller-manager-57b9555869-dn8vt" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.685056 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22a07492-3b52-4508-ac59-f10c1eb94bd5-config\") pod \"route-controller-manager-6595c9664f-wp9fd\" (UID: \"22a07492-3b52-4508-ac59-f10c1eb94bd5\") " pod="openshift-route-controller-manager/route-controller-manager-6595c9664f-wp9fd" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.685151 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/753c4346-bc77-4a0b-9644-b2faea3f3229-config\") pod \"controller-manager-57b9555869-dn8vt\" (UID: \"753c4346-bc77-4a0b-9644-b2faea3f3229\") " pod="openshift-controller-manager/controller-manager-57b9555869-dn8vt" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.685511 4786 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/753c4346-bc77-4a0b-9644-b2faea3f3229-proxy-ca-bundles\") pod \"controller-manager-57b9555869-dn8vt\" (UID: \"753c4346-bc77-4a0b-9644-b2faea3f3229\") " pod="openshift-controller-manager/controller-manager-57b9555869-dn8vt" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.687780 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/753c4346-bc77-4a0b-9644-b2faea3f3229-serving-cert\") pod \"controller-manager-57b9555869-dn8vt\" (UID: \"753c4346-bc77-4a0b-9644-b2faea3f3229\") " pod="openshift-controller-manager/controller-manager-57b9555869-dn8vt" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.692739 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22a07492-3b52-4508-ac59-f10c1eb94bd5-serving-cert\") pod \"route-controller-manager-6595c9664f-wp9fd\" (UID: \"22a07492-3b52-4508-ac59-f10c1eb94bd5\") " pod="openshift-route-controller-manager/route-controller-manager-6595c9664f-wp9fd" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.701348 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckbr8\" (UniqueName: \"kubernetes.io/projected/753c4346-bc77-4a0b-9644-b2faea3f3229-kube-api-access-ckbr8\") pod \"controller-manager-57b9555869-dn8vt\" (UID: \"753c4346-bc77-4a0b-9644-b2faea3f3229\") " pod="openshift-controller-manager/controller-manager-57b9555869-dn8vt" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.708402 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdftz\" (UniqueName: \"kubernetes.io/projected/22a07492-3b52-4508-ac59-f10c1eb94bd5-kube-api-access-fdftz\") pod \"route-controller-manager-6595c9664f-wp9fd\" (UID: \"22a07492-3b52-4508-ac59-f10c1eb94bd5\") " 
pod="openshift-route-controller-manager/route-controller-manager-6595c9664f-wp9fd" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.858041 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57b9555869-dn8vt" Jan 27 13:12:00 crc kubenswrapper[4786]: I0127 13:12:00.865292 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6595c9664f-wp9fd" Jan 27 13:12:01 crc kubenswrapper[4786]: I0127 13:12:01.031346 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57b9555869-dn8vt"] Jan 27 13:12:01 crc kubenswrapper[4786]: I0127 13:12:01.275334 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6595c9664f-wp9fd"] Jan 27 13:12:01 crc kubenswrapper[4786]: W0127 13:12:01.277479 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22a07492_3b52_4508_ac59_f10c1eb94bd5.slice/crio-0ad2e2146f428673a82eed3cd4ca80bfd5e908a3c9ad97302aa6490b9b211335 WatchSource:0}: Error finding container 0ad2e2146f428673a82eed3cd4ca80bfd5e908a3c9ad97302aa6490b9b211335: Status 404 returned error can't find the container with id 0ad2e2146f428673a82eed3cd4ca80bfd5e908a3c9ad97302aa6490b9b211335 Jan 27 13:12:02 crc kubenswrapper[4786]: I0127 13:12:02.040809 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6595c9664f-wp9fd" event={"ID":"22a07492-3b52-4508-ac59-f10c1eb94bd5","Type":"ContainerStarted","Data":"b6263ec2e3ecded6710e17913bb153c65c7f672f3d4370190aafd815d24deb8b"} Jan 27 13:12:02 crc kubenswrapper[4786]: I0127 13:12:02.040865 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6595c9664f-wp9fd" 
event={"ID":"22a07492-3b52-4508-ac59-f10c1eb94bd5","Type":"ContainerStarted","Data":"0ad2e2146f428673a82eed3cd4ca80bfd5e908a3c9ad97302aa6490b9b211335"} Jan 27 13:12:02 crc kubenswrapper[4786]: I0127 13:12:02.041017 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6595c9664f-wp9fd" Jan 27 13:12:02 crc kubenswrapper[4786]: I0127 13:12:02.043287 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57b9555869-dn8vt" event={"ID":"753c4346-bc77-4a0b-9644-b2faea3f3229","Type":"ContainerStarted","Data":"2c2991671e658ff78770847a8774fa744e9b97d102292f7ff8f62d8cbbae8108"} Jan 27 13:12:02 crc kubenswrapper[4786]: I0127 13:12:02.043321 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57b9555869-dn8vt" event={"ID":"753c4346-bc77-4a0b-9644-b2faea3f3229","Type":"ContainerStarted","Data":"7af8f102c921c802260a522e446f1168d167269d97f357d0fc9ac1eb7b93dac8"} Jan 27 13:12:02 crc kubenswrapper[4786]: I0127 13:12:02.043656 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-57b9555869-dn8vt" Jan 27 13:12:02 crc kubenswrapper[4786]: I0127 13:12:02.047777 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-57b9555869-dn8vt" Jan 27 13:12:02 crc kubenswrapper[4786]: I0127 13:12:02.052073 4786 patch_prober.go:28] interesting pod/route-controller-manager-6595c9664f-wp9fd container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 27 13:12:02 crc kubenswrapper[4786]: [+]log ok Jan 27 13:12:02 crc kubenswrapper[4786]: [-]poststarthook/max-in-flight-filter failed: reason withheld Jan 27 13:12:02 crc kubenswrapper[4786]: 
[-]poststarthook/storage-object-count-tracker-hook failed: reason withheld Jan 27 13:12:02 crc kubenswrapper[4786]: healthz check failed Jan 27 13:12:02 crc kubenswrapper[4786]: I0127 13:12:02.052157 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6595c9664f-wp9fd" podUID="22a07492-3b52-4508-ac59-f10c1eb94bd5" containerName="route-controller-manager" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 13:12:02 crc kubenswrapper[4786]: I0127 13:12:02.060157 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6595c9664f-wp9fd" podStartSLOduration=4.060105129 podStartE2EDuration="4.060105129s" podCreationTimestamp="2026-01-27 13:11:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:12:02.056074833 +0000 UTC m=+305.266688942" watchObservedRunningTime="2026-01-27 13:12:02.060105129 +0000 UTC m=+305.270719288" Jan 27 13:12:02 crc kubenswrapper[4786]: I0127 13:12:02.085513 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-57b9555869-dn8vt" podStartSLOduration=4.085479988 podStartE2EDuration="4.085479988s" podCreationTimestamp="2026-01-27 13:11:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:12:02.082955502 +0000 UTC m=+305.293569621" watchObservedRunningTime="2026-01-27 13:12:02.085479988 +0000 UTC m=+305.296094147" Jan 27 13:12:03 crc kubenswrapper[4786]: I0127 13:12:03.049115 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 27 13:12:03 crc kubenswrapper[4786]: I0127 
13:12:03.050562 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 13:12:03 crc kubenswrapper[4786]: I0127 13:12:03.050594 4786 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="822fa3d5243a9bfcf58e6201e07aed0dddcd71d19cb2981b8c50c16f46a7292d" exitCode=137 Jan 27 13:12:03 crc kubenswrapper[4786]: I0127 13:12:03.051145 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"822fa3d5243a9bfcf58e6201e07aed0dddcd71d19cb2981b8c50c16f46a7292d"} Jan 27 13:12:03 crc kubenswrapper[4786]: I0127 13:12:03.051180 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cf39afcbde1ef4f729828a0dad4044525c03d0aa05e5a4a34a61bb9238f25873"} Jan 27 13:12:03 crc kubenswrapper[4786]: I0127 13:12:03.051199 4786 scope.go:117] "RemoveContainer" containerID="181c870b27579fdf828f52cee2ce312219d03f4d4c298f83cf4f74d0e5af0fc1" Jan 27 13:12:03 crc kubenswrapper[4786]: I0127 13:12:03.066918 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6595c9664f-wp9fd" Jan 27 13:12:04 crc kubenswrapper[4786]: I0127 13:12:04.061374 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 27 13:12:10 crc kubenswrapper[4786]: I0127 13:12:10.063971 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 13:12:12 crc 
kubenswrapper[4786]: I0127 13:12:12.166787 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 13:12:12 crc kubenswrapper[4786]: I0127 13:12:12.172707 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 13:12:13 crc kubenswrapper[4786]: I0127 13:12:13.133763 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 13:12:21 crc kubenswrapper[4786]: I0127 13:12:21.755199 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6595c9664f-wp9fd"] Jan 27 13:12:21 crc kubenswrapper[4786]: I0127 13:12:21.755817 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6595c9664f-wp9fd" podUID="22a07492-3b52-4508-ac59-f10c1eb94bd5" containerName="route-controller-manager" containerID="cri-o://b6263ec2e3ecded6710e17913bb153c65c7f672f3d4370190aafd815d24deb8b" gracePeriod=30 Jan 27 13:12:22 crc kubenswrapper[4786]: I0127 13:12:22.171421 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6595c9664f-wp9fd" Jan 27 13:12:22 crc kubenswrapper[4786]: I0127 13:12:22.175075 4786 generic.go:334] "Generic (PLEG): container finished" podID="22a07492-3b52-4508-ac59-f10c1eb94bd5" containerID="b6263ec2e3ecded6710e17913bb153c65c7f672f3d4370190aafd815d24deb8b" exitCode=0 Jan 27 13:12:22 crc kubenswrapper[4786]: I0127 13:12:22.175121 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6595c9664f-wp9fd" event={"ID":"22a07492-3b52-4508-ac59-f10c1eb94bd5","Type":"ContainerDied","Data":"b6263ec2e3ecded6710e17913bb153c65c7f672f3d4370190aafd815d24deb8b"} Jan 27 13:12:22 crc kubenswrapper[4786]: I0127 13:12:22.175152 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6595c9664f-wp9fd" event={"ID":"22a07492-3b52-4508-ac59-f10c1eb94bd5","Type":"ContainerDied","Data":"0ad2e2146f428673a82eed3cd4ca80bfd5e908a3c9ad97302aa6490b9b211335"} Jan 27 13:12:22 crc kubenswrapper[4786]: I0127 13:12:22.175179 4786 scope.go:117] "RemoveContainer" containerID="b6263ec2e3ecded6710e17913bb153c65c7f672f3d4370190aafd815d24deb8b" Jan 27 13:12:22 crc kubenswrapper[4786]: I0127 13:12:22.175164 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6595c9664f-wp9fd" Jan 27 13:12:22 crc kubenswrapper[4786]: I0127 13:12:22.193109 4786 scope.go:117] "RemoveContainer" containerID="b6263ec2e3ecded6710e17913bb153c65c7f672f3d4370190aafd815d24deb8b" Jan 27 13:12:22 crc kubenswrapper[4786]: E0127 13:12:22.193582 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6263ec2e3ecded6710e17913bb153c65c7f672f3d4370190aafd815d24deb8b\": container with ID starting with b6263ec2e3ecded6710e17913bb153c65c7f672f3d4370190aafd815d24deb8b not found: ID does not exist" containerID="b6263ec2e3ecded6710e17913bb153c65c7f672f3d4370190aafd815d24deb8b" Jan 27 13:12:22 crc kubenswrapper[4786]: I0127 13:12:22.193669 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6263ec2e3ecded6710e17913bb153c65c7f672f3d4370190aafd815d24deb8b"} err="failed to get container status \"b6263ec2e3ecded6710e17913bb153c65c7f672f3d4370190aafd815d24deb8b\": rpc error: code = NotFound desc = could not find container \"b6263ec2e3ecded6710e17913bb153c65c7f672f3d4370190aafd815d24deb8b\": container with ID starting with b6263ec2e3ecded6710e17913bb153c65c7f672f3d4370190aafd815d24deb8b not found: ID does not exist" Jan 27 13:12:22 crc kubenswrapper[4786]: I0127 13:12:22.193930 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/22a07492-3b52-4508-ac59-f10c1eb94bd5-client-ca\") pod \"22a07492-3b52-4508-ac59-f10c1eb94bd5\" (UID: \"22a07492-3b52-4508-ac59-f10c1eb94bd5\") " Jan 27 13:12:22 crc kubenswrapper[4786]: I0127 13:12:22.194013 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22a07492-3b52-4508-ac59-f10c1eb94bd5-serving-cert\") pod \"22a07492-3b52-4508-ac59-f10c1eb94bd5\" (UID: 
\"22a07492-3b52-4508-ac59-f10c1eb94bd5\") " Jan 27 13:12:22 crc kubenswrapper[4786]: I0127 13:12:22.194063 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22a07492-3b52-4508-ac59-f10c1eb94bd5-config\") pod \"22a07492-3b52-4508-ac59-f10c1eb94bd5\" (UID: \"22a07492-3b52-4508-ac59-f10c1eb94bd5\") " Jan 27 13:12:22 crc kubenswrapper[4786]: I0127 13:12:22.194919 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22a07492-3b52-4508-ac59-f10c1eb94bd5-client-ca" (OuterVolumeSpecName: "client-ca") pod "22a07492-3b52-4508-ac59-f10c1eb94bd5" (UID: "22a07492-3b52-4508-ac59-f10c1eb94bd5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:12:22 crc kubenswrapper[4786]: I0127 13:12:22.195085 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22a07492-3b52-4508-ac59-f10c1eb94bd5-config" (OuterVolumeSpecName: "config") pod "22a07492-3b52-4508-ac59-f10c1eb94bd5" (UID: "22a07492-3b52-4508-ac59-f10c1eb94bd5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:12:22 crc kubenswrapper[4786]: I0127 13:12:22.200536 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22a07492-3b52-4508-ac59-f10c1eb94bd5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "22a07492-3b52-4508-ac59-f10c1eb94bd5" (UID: "22a07492-3b52-4508-ac59-f10c1eb94bd5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:12:22 crc kubenswrapper[4786]: I0127 13:12:22.295076 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdftz\" (UniqueName: \"kubernetes.io/projected/22a07492-3b52-4508-ac59-f10c1eb94bd5-kube-api-access-fdftz\") pod \"22a07492-3b52-4508-ac59-f10c1eb94bd5\" (UID: \"22a07492-3b52-4508-ac59-f10c1eb94bd5\") " Jan 27 13:12:22 crc kubenswrapper[4786]: I0127 13:12:22.295977 4786 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/22a07492-3b52-4508-ac59-f10c1eb94bd5-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 13:12:22 crc kubenswrapper[4786]: I0127 13:12:22.295998 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22a07492-3b52-4508-ac59-f10c1eb94bd5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:12:22 crc kubenswrapper[4786]: I0127 13:12:22.296033 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22a07492-3b52-4508-ac59-f10c1eb94bd5-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:12:22 crc kubenswrapper[4786]: I0127 13:12:22.298518 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22a07492-3b52-4508-ac59-f10c1eb94bd5-kube-api-access-fdftz" (OuterVolumeSpecName: "kube-api-access-fdftz") pod "22a07492-3b52-4508-ac59-f10c1eb94bd5" (UID: "22a07492-3b52-4508-ac59-f10c1eb94bd5"). InnerVolumeSpecName "kube-api-access-fdftz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:12:22 crc kubenswrapper[4786]: I0127 13:12:22.397048 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdftz\" (UniqueName: \"kubernetes.io/projected/22a07492-3b52-4508-ac59-f10c1eb94bd5-kube-api-access-fdftz\") on node \"crc\" DevicePath \"\"" Jan 27 13:12:22 crc kubenswrapper[4786]: I0127 13:12:22.498739 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6595c9664f-wp9fd"] Jan 27 13:12:22 crc kubenswrapper[4786]: I0127 13:12:22.502365 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6595c9664f-wp9fd"] Jan 27 13:12:23 crc kubenswrapper[4786]: I0127 13:12:23.474546 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22a07492-3b52-4508-ac59-f10c1eb94bd5" path="/var/lib/kubelet/pods/22a07492-3b52-4508-ac59-f10c1eb94bd5/volumes" Jan 27 13:12:23 crc kubenswrapper[4786]: I0127 13:12:23.508156 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f579bbdcb-bwfzp"] Jan 27 13:12:23 crc kubenswrapper[4786]: E0127 13:12:23.508428 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22a07492-3b52-4508-ac59-f10c1eb94bd5" containerName="route-controller-manager" Jan 27 13:12:23 crc kubenswrapper[4786]: I0127 13:12:23.508445 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="22a07492-3b52-4508-ac59-f10c1eb94bd5" containerName="route-controller-manager" Jan 27 13:12:23 crc kubenswrapper[4786]: I0127 13:12:23.508591 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="22a07492-3b52-4508-ac59-f10c1eb94bd5" containerName="route-controller-manager" Jan 27 13:12:23 crc kubenswrapper[4786]: I0127 13:12:23.509082 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f579bbdcb-bwfzp" Jan 27 13:12:23 crc kubenswrapper[4786]: I0127 13:12:23.511265 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 13:12:23 crc kubenswrapper[4786]: I0127 13:12:23.512025 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 13:12:23 crc kubenswrapper[4786]: I0127 13:12:23.512255 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 13:12:23 crc kubenswrapper[4786]: I0127 13:12:23.512561 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 13:12:23 crc kubenswrapper[4786]: I0127 13:12:23.512762 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 13:12:23 crc kubenswrapper[4786]: I0127 13:12:23.512927 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 13:12:23 crc kubenswrapper[4786]: I0127 13:12:23.520778 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f579bbdcb-bwfzp"] Jan 27 13:12:23 crc kubenswrapper[4786]: I0127 13:12:23.612229 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bdd97be-6eee-4790-a0ad-f8eb95001c47-config\") pod \"route-controller-manager-6f579bbdcb-bwfzp\" (UID: \"5bdd97be-6eee-4790-a0ad-f8eb95001c47\") " pod="openshift-route-controller-manager/route-controller-manager-6f579bbdcb-bwfzp" Jan 27 13:12:23 crc kubenswrapper[4786]: I0127 13:12:23.612299 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5bdd97be-6eee-4790-a0ad-f8eb95001c47-client-ca\") pod \"route-controller-manager-6f579bbdcb-bwfzp\" (UID: \"5bdd97be-6eee-4790-a0ad-f8eb95001c47\") " pod="openshift-route-controller-manager/route-controller-manager-6f579bbdcb-bwfzp" Jan 27 13:12:23 crc kubenswrapper[4786]: I0127 13:12:23.612344 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57jrd\" (UniqueName: \"kubernetes.io/projected/5bdd97be-6eee-4790-a0ad-f8eb95001c47-kube-api-access-57jrd\") pod \"route-controller-manager-6f579bbdcb-bwfzp\" (UID: \"5bdd97be-6eee-4790-a0ad-f8eb95001c47\") " pod="openshift-route-controller-manager/route-controller-manager-6f579bbdcb-bwfzp" Jan 27 13:12:23 crc kubenswrapper[4786]: I0127 13:12:23.612370 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bdd97be-6eee-4790-a0ad-f8eb95001c47-serving-cert\") pod \"route-controller-manager-6f579bbdcb-bwfzp\" (UID: \"5bdd97be-6eee-4790-a0ad-f8eb95001c47\") " pod="openshift-route-controller-manager/route-controller-manager-6f579bbdcb-bwfzp" Jan 27 13:12:23 crc kubenswrapper[4786]: I0127 13:12:23.713765 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5bdd97be-6eee-4790-a0ad-f8eb95001c47-client-ca\") pod \"route-controller-manager-6f579bbdcb-bwfzp\" (UID: \"5bdd97be-6eee-4790-a0ad-f8eb95001c47\") " pod="openshift-route-controller-manager/route-controller-manager-6f579bbdcb-bwfzp" Jan 27 13:12:23 crc kubenswrapper[4786]: I0127 13:12:23.713828 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57jrd\" (UniqueName: \"kubernetes.io/projected/5bdd97be-6eee-4790-a0ad-f8eb95001c47-kube-api-access-57jrd\") pod 
\"route-controller-manager-6f579bbdcb-bwfzp\" (UID: \"5bdd97be-6eee-4790-a0ad-f8eb95001c47\") " pod="openshift-route-controller-manager/route-controller-manager-6f579bbdcb-bwfzp" Jan 27 13:12:23 crc kubenswrapper[4786]: I0127 13:12:23.713866 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bdd97be-6eee-4790-a0ad-f8eb95001c47-serving-cert\") pod \"route-controller-manager-6f579bbdcb-bwfzp\" (UID: \"5bdd97be-6eee-4790-a0ad-f8eb95001c47\") " pod="openshift-route-controller-manager/route-controller-manager-6f579bbdcb-bwfzp" Jan 27 13:12:23 crc kubenswrapper[4786]: I0127 13:12:23.713918 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bdd97be-6eee-4790-a0ad-f8eb95001c47-config\") pod \"route-controller-manager-6f579bbdcb-bwfzp\" (UID: \"5bdd97be-6eee-4790-a0ad-f8eb95001c47\") " pod="openshift-route-controller-manager/route-controller-manager-6f579bbdcb-bwfzp" Jan 27 13:12:23 crc kubenswrapper[4786]: I0127 13:12:23.715080 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5bdd97be-6eee-4790-a0ad-f8eb95001c47-client-ca\") pod \"route-controller-manager-6f579bbdcb-bwfzp\" (UID: \"5bdd97be-6eee-4790-a0ad-f8eb95001c47\") " pod="openshift-route-controller-manager/route-controller-manager-6f579bbdcb-bwfzp" Jan 27 13:12:23 crc kubenswrapper[4786]: I0127 13:12:23.715083 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bdd97be-6eee-4790-a0ad-f8eb95001c47-config\") pod \"route-controller-manager-6f579bbdcb-bwfzp\" (UID: \"5bdd97be-6eee-4790-a0ad-f8eb95001c47\") " pod="openshift-route-controller-manager/route-controller-manager-6f579bbdcb-bwfzp" Jan 27 13:12:23 crc kubenswrapper[4786]: I0127 13:12:23.719196 4786 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bdd97be-6eee-4790-a0ad-f8eb95001c47-serving-cert\") pod \"route-controller-manager-6f579bbdcb-bwfzp\" (UID: \"5bdd97be-6eee-4790-a0ad-f8eb95001c47\") " pod="openshift-route-controller-manager/route-controller-manager-6f579bbdcb-bwfzp" Jan 27 13:12:23 crc kubenswrapper[4786]: I0127 13:12:23.733241 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57jrd\" (UniqueName: \"kubernetes.io/projected/5bdd97be-6eee-4790-a0ad-f8eb95001c47-kube-api-access-57jrd\") pod \"route-controller-manager-6f579bbdcb-bwfzp\" (UID: \"5bdd97be-6eee-4790-a0ad-f8eb95001c47\") " pod="openshift-route-controller-manager/route-controller-manager-6f579bbdcb-bwfzp" Jan 27 13:12:23 crc kubenswrapper[4786]: I0127 13:12:23.829885 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f579bbdcb-bwfzp" Jan 27 13:12:24 crc kubenswrapper[4786]: I0127 13:12:24.231249 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f579bbdcb-bwfzp"] Jan 27 13:12:25 crc kubenswrapper[4786]: I0127 13:12:25.190900 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f579bbdcb-bwfzp" event={"ID":"5bdd97be-6eee-4790-a0ad-f8eb95001c47","Type":"ContainerStarted","Data":"30ff5bcb3c8525d3ca3dacac650ef6307566e332d49d8e560a60998ad755f788"} Jan 27 13:12:25 crc kubenswrapper[4786]: I0127 13:12:25.191194 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6f579bbdcb-bwfzp" Jan 27 13:12:25 crc kubenswrapper[4786]: I0127 13:12:25.191204 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f579bbdcb-bwfzp" 
event={"ID":"5bdd97be-6eee-4790-a0ad-f8eb95001c47","Type":"ContainerStarted","Data":"5dd873e997e0461c38ca16f6c50bfdc3dac3c39a11e769f815c3e01d869a7748"} Jan 27 13:12:25 crc kubenswrapper[4786]: I0127 13:12:25.195619 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6f579bbdcb-bwfzp" Jan 27 13:12:25 crc kubenswrapper[4786]: I0127 13:12:25.205561 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6f579bbdcb-bwfzp" podStartSLOduration=4.205541689 podStartE2EDuration="4.205541689s" podCreationTimestamp="2026-01-27 13:12:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:12:25.204963343 +0000 UTC m=+328.415577462" watchObservedRunningTime="2026-01-27 13:12:25.205541689 +0000 UTC m=+328.416155808" Jan 27 13:12:38 crc kubenswrapper[4786]: I0127 13:12:38.393437 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-57b9555869-dn8vt"] Jan 27 13:12:38 crc kubenswrapper[4786]: I0127 13:12:38.395054 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-57b9555869-dn8vt" podUID="753c4346-bc77-4a0b-9644-b2faea3f3229" containerName="controller-manager" containerID="cri-o://2c2991671e658ff78770847a8774fa744e9b97d102292f7ff8f62d8cbbae8108" gracePeriod=30 Jan 27 13:12:38 crc kubenswrapper[4786]: I0127 13:12:38.740271 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-57b9555869-dn8vt" Jan 27 13:12:38 crc kubenswrapper[4786]: I0127 13:12:38.909579 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckbr8\" (UniqueName: \"kubernetes.io/projected/753c4346-bc77-4a0b-9644-b2faea3f3229-kube-api-access-ckbr8\") pod \"753c4346-bc77-4a0b-9644-b2faea3f3229\" (UID: \"753c4346-bc77-4a0b-9644-b2faea3f3229\") " Jan 27 13:12:38 crc kubenswrapper[4786]: I0127 13:12:38.909687 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/753c4346-bc77-4a0b-9644-b2faea3f3229-client-ca\") pod \"753c4346-bc77-4a0b-9644-b2faea3f3229\" (UID: \"753c4346-bc77-4a0b-9644-b2faea3f3229\") " Jan 27 13:12:38 crc kubenswrapper[4786]: I0127 13:12:38.909716 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/753c4346-bc77-4a0b-9644-b2faea3f3229-serving-cert\") pod \"753c4346-bc77-4a0b-9644-b2faea3f3229\" (UID: \"753c4346-bc77-4a0b-9644-b2faea3f3229\") " Jan 27 13:12:38 crc kubenswrapper[4786]: I0127 13:12:38.909758 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/753c4346-bc77-4a0b-9644-b2faea3f3229-proxy-ca-bundles\") pod \"753c4346-bc77-4a0b-9644-b2faea3f3229\" (UID: \"753c4346-bc77-4a0b-9644-b2faea3f3229\") " Jan 27 13:12:38 crc kubenswrapper[4786]: I0127 13:12:38.909806 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/753c4346-bc77-4a0b-9644-b2faea3f3229-config\") pod \"753c4346-bc77-4a0b-9644-b2faea3f3229\" (UID: \"753c4346-bc77-4a0b-9644-b2faea3f3229\") " Jan 27 13:12:38 crc kubenswrapper[4786]: I0127 13:12:38.910582 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/753c4346-bc77-4a0b-9644-b2faea3f3229-client-ca" (OuterVolumeSpecName: "client-ca") pod "753c4346-bc77-4a0b-9644-b2faea3f3229" (UID: "753c4346-bc77-4a0b-9644-b2faea3f3229"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:12:38 crc kubenswrapper[4786]: I0127 13:12:38.910764 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/753c4346-bc77-4a0b-9644-b2faea3f3229-config" (OuterVolumeSpecName: "config") pod "753c4346-bc77-4a0b-9644-b2faea3f3229" (UID: "753c4346-bc77-4a0b-9644-b2faea3f3229"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:12:38 crc kubenswrapper[4786]: I0127 13:12:38.911305 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/753c4346-bc77-4a0b-9644-b2faea3f3229-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "753c4346-bc77-4a0b-9644-b2faea3f3229" (UID: "753c4346-bc77-4a0b-9644-b2faea3f3229"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:12:38 crc kubenswrapper[4786]: I0127 13:12:38.915378 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/753c4346-bc77-4a0b-9644-b2faea3f3229-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "753c4346-bc77-4a0b-9644-b2faea3f3229" (UID: "753c4346-bc77-4a0b-9644-b2faea3f3229"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:12:38 crc kubenswrapper[4786]: I0127 13:12:38.916934 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/753c4346-bc77-4a0b-9644-b2faea3f3229-kube-api-access-ckbr8" (OuterVolumeSpecName: "kube-api-access-ckbr8") pod "753c4346-bc77-4a0b-9644-b2faea3f3229" (UID: "753c4346-bc77-4a0b-9644-b2faea3f3229"). InnerVolumeSpecName "kube-api-access-ckbr8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:12:39 crc kubenswrapper[4786]: I0127 13:12:39.011510 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckbr8\" (UniqueName: \"kubernetes.io/projected/753c4346-bc77-4a0b-9644-b2faea3f3229-kube-api-access-ckbr8\") on node \"crc\" DevicePath \"\"" Jan 27 13:12:39 crc kubenswrapper[4786]: I0127 13:12:39.012375 4786 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/753c4346-bc77-4a0b-9644-b2faea3f3229-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 13:12:39 crc kubenswrapper[4786]: I0127 13:12:39.012419 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/753c4346-bc77-4a0b-9644-b2faea3f3229-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:12:39 crc kubenswrapper[4786]: I0127 13:12:39.012445 4786 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/753c4346-bc77-4a0b-9644-b2faea3f3229-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 13:12:39 crc kubenswrapper[4786]: I0127 13:12:39.012470 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/753c4346-bc77-4a0b-9644-b2faea3f3229-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:12:39 crc kubenswrapper[4786]: I0127 13:12:39.275535 4786 generic.go:334] "Generic (PLEG): container finished" podID="753c4346-bc77-4a0b-9644-b2faea3f3229" containerID="2c2991671e658ff78770847a8774fa744e9b97d102292f7ff8f62d8cbbae8108" exitCode=0 Jan 27 13:12:39 crc kubenswrapper[4786]: I0127 13:12:39.275636 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57b9555869-dn8vt" event={"ID":"753c4346-bc77-4a0b-9644-b2faea3f3229","Type":"ContainerDied","Data":"2c2991671e658ff78770847a8774fa744e9b97d102292f7ff8f62d8cbbae8108"} Jan 27 13:12:39 crc 
kubenswrapper[4786]: I0127 13:12:39.275702 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57b9555869-dn8vt" Jan 27 13:12:39 crc kubenswrapper[4786]: I0127 13:12:39.275738 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57b9555869-dn8vt" event={"ID":"753c4346-bc77-4a0b-9644-b2faea3f3229","Type":"ContainerDied","Data":"7af8f102c921c802260a522e446f1168d167269d97f357d0fc9ac1eb7b93dac8"} Jan 27 13:12:39 crc kubenswrapper[4786]: I0127 13:12:39.275770 4786 scope.go:117] "RemoveContainer" containerID="2c2991671e658ff78770847a8774fa744e9b97d102292f7ff8f62d8cbbae8108" Jan 27 13:12:39 crc kubenswrapper[4786]: I0127 13:12:39.306730 4786 scope.go:117] "RemoveContainer" containerID="2c2991671e658ff78770847a8774fa744e9b97d102292f7ff8f62d8cbbae8108" Jan 27 13:12:39 crc kubenswrapper[4786]: E0127 13:12:39.307369 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c2991671e658ff78770847a8774fa744e9b97d102292f7ff8f62d8cbbae8108\": container with ID starting with 2c2991671e658ff78770847a8774fa744e9b97d102292f7ff8f62d8cbbae8108 not found: ID does not exist" containerID="2c2991671e658ff78770847a8774fa744e9b97d102292f7ff8f62d8cbbae8108" Jan 27 13:12:39 crc kubenswrapper[4786]: I0127 13:12:39.307432 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c2991671e658ff78770847a8774fa744e9b97d102292f7ff8f62d8cbbae8108"} err="failed to get container status \"2c2991671e658ff78770847a8774fa744e9b97d102292f7ff8f62d8cbbae8108\": rpc error: code = NotFound desc = could not find container \"2c2991671e658ff78770847a8774fa744e9b97d102292f7ff8f62d8cbbae8108\": container with ID starting with 2c2991671e658ff78770847a8774fa744e9b97d102292f7ff8f62d8cbbae8108 not found: ID does not exist" Jan 27 13:12:39 crc kubenswrapper[4786]: I0127 
13:12:39.315778 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-57b9555869-dn8vt"] Jan 27 13:12:39 crc kubenswrapper[4786]: I0127 13:12:39.322872 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-57b9555869-dn8vt"] Jan 27 13:12:39 crc kubenswrapper[4786]: I0127 13:12:39.475479 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="753c4346-bc77-4a0b-9644-b2faea3f3229" path="/var/lib/kubelet/pods/753c4346-bc77-4a0b-9644-b2faea3f3229/volumes" Jan 27 13:12:39 crc kubenswrapper[4786]: I0127 13:12:39.518793 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-78b5d66777-xhvwg"] Jan 27 13:12:39 crc kubenswrapper[4786]: E0127 13:12:39.519251 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="753c4346-bc77-4a0b-9644-b2faea3f3229" containerName="controller-manager" Jan 27 13:12:39 crc kubenswrapper[4786]: I0127 13:12:39.519337 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="753c4346-bc77-4a0b-9644-b2faea3f3229" containerName="controller-manager" Jan 27 13:12:39 crc kubenswrapper[4786]: I0127 13:12:39.519499 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="753c4346-bc77-4a0b-9644-b2faea3f3229" containerName="controller-manager" Jan 27 13:12:39 crc kubenswrapper[4786]: I0127 13:12:39.519936 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-78b5d66777-xhvwg" Jan 27 13:12:39 crc kubenswrapper[4786]: I0127 13:12:39.524196 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 13:12:39 crc kubenswrapper[4786]: I0127 13:12:39.526637 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 13:12:39 crc kubenswrapper[4786]: I0127 13:12:39.526676 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 13:12:39 crc kubenswrapper[4786]: I0127 13:12:39.526852 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 13:12:39 crc kubenswrapper[4786]: I0127 13:12:39.527234 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 13:12:39 crc kubenswrapper[4786]: I0127 13:12:39.527566 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 13:12:39 crc kubenswrapper[4786]: I0127 13:12:39.530343 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-78b5d66777-xhvwg"] Jan 27 13:12:39 crc kubenswrapper[4786]: I0127 13:12:39.569157 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 13:12:39 crc kubenswrapper[4786]: I0127 13:12:39.621308 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a860198-23ac-4e13-b770-40b7aa77c0ad-serving-cert\") pod \"controller-manager-78b5d66777-xhvwg\" (UID: \"6a860198-23ac-4e13-b770-40b7aa77c0ad\") " 
pod="openshift-controller-manager/controller-manager-78b5d66777-xhvwg" Jan 27 13:12:39 crc kubenswrapper[4786]: I0127 13:12:39.621376 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a860198-23ac-4e13-b770-40b7aa77c0ad-client-ca\") pod \"controller-manager-78b5d66777-xhvwg\" (UID: \"6a860198-23ac-4e13-b770-40b7aa77c0ad\") " pod="openshift-controller-manager/controller-manager-78b5d66777-xhvwg" Jan 27 13:12:39 crc kubenswrapper[4786]: I0127 13:12:39.621440 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a860198-23ac-4e13-b770-40b7aa77c0ad-config\") pod \"controller-manager-78b5d66777-xhvwg\" (UID: \"6a860198-23ac-4e13-b770-40b7aa77c0ad\") " pod="openshift-controller-manager/controller-manager-78b5d66777-xhvwg" Jan 27 13:12:39 crc kubenswrapper[4786]: I0127 13:12:39.621465 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrrzd\" (UniqueName: \"kubernetes.io/projected/6a860198-23ac-4e13-b770-40b7aa77c0ad-kube-api-access-lrrzd\") pod \"controller-manager-78b5d66777-xhvwg\" (UID: \"6a860198-23ac-4e13-b770-40b7aa77c0ad\") " pod="openshift-controller-manager/controller-manager-78b5d66777-xhvwg" Jan 27 13:12:39 crc kubenswrapper[4786]: I0127 13:12:39.621485 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6a860198-23ac-4e13-b770-40b7aa77c0ad-proxy-ca-bundles\") pod \"controller-manager-78b5d66777-xhvwg\" (UID: \"6a860198-23ac-4e13-b770-40b7aa77c0ad\") " pod="openshift-controller-manager/controller-manager-78b5d66777-xhvwg" Jan 27 13:12:39 crc kubenswrapper[4786]: I0127 13:12:39.722499 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/6a860198-23ac-4e13-b770-40b7aa77c0ad-client-ca\") pod \"controller-manager-78b5d66777-xhvwg\" (UID: \"6a860198-23ac-4e13-b770-40b7aa77c0ad\") " pod="openshift-controller-manager/controller-manager-78b5d66777-xhvwg" Jan 27 13:12:39 crc kubenswrapper[4786]: I0127 13:12:39.722675 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a860198-23ac-4e13-b770-40b7aa77c0ad-config\") pod \"controller-manager-78b5d66777-xhvwg\" (UID: \"6a860198-23ac-4e13-b770-40b7aa77c0ad\") " pod="openshift-controller-manager/controller-manager-78b5d66777-xhvwg" Jan 27 13:12:39 crc kubenswrapper[4786]: I0127 13:12:39.722729 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrrzd\" (UniqueName: \"kubernetes.io/projected/6a860198-23ac-4e13-b770-40b7aa77c0ad-kube-api-access-lrrzd\") pod \"controller-manager-78b5d66777-xhvwg\" (UID: \"6a860198-23ac-4e13-b770-40b7aa77c0ad\") " pod="openshift-controller-manager/controller-manager-78b5d66777-xhvwg" Jan 27 13:12:39 crc kubenswrapper[4786]: I0127 13:12:39.722753 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6a860198-23ac-4e13-b770-40b7aa77c0ad-proxy-ca-bundles\") pod \"controller-manager-78b5d66777-xhvwg\" (UID: \"6a860198-23ac-4e13-b770-40b7aa77c0ad\") " pod="openshift-controller-manager/controller-manager-78b5d66777-xhvwg" Jan 27 13:12:39 crc kubenswrapper[4786]: I0127 13:12:39.722863 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a860198-23ac-4e13-b770-40b7aa77c0ad-serving-cert\") pod \"controller-manager-78b5d66777-xhvwg\" (UID: \"6a860198-23ac-4e13-b770-40b7aa77c0ad\") " pod="openshift-controller-manager/controller-manager-78b5d66777-xhvwg" Jan 27 13:12:39 crc kubenswrapper[4786]: I0127 13:12:39.723549 4786 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a860198-23ac-4e13-b770-40b7aa77c0ad-client-ca\") pod \"controller-manager-78b5d66777-xhvwg\" (UID: \"6a860198-23ac-4e13-b770-40b7aa77c0ad\") " pod="openshift-controller-manager/controller-manager-78b5d66777-xhvwg" Jan 27 13:12:39 crc kubenswrapper[4786]: I0127 13:12:39.723708 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6a860198-23ac-4e13-b770-40b7aa77c0ad-proxy-ca-bundles\") pod \"controller-manager-78b5d66777-xhvwg\" (UID: \"6a860198-23ac-4e13-b770-40b7aa77c0ad\") " pod="openshift-controller-manager/controller-manager-78b5d66777-xhvwg" Jan 27 13:12:39 crc kubenswrapper[4786]: I0127 13:12:39.724451 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a860198-23ac-4e13-b770-40b7aa77c0ad-config\") pod \"controller-manager-78b5d66777-xhvwg\" (UID: \"6a860198-23ac-4e13-b770-40b7aa77c0ad\") " pod="openshift-controller-manager/controller-manager-78b5d66777-xhvwg" Jan 27 13:12:39 crc kubenswrapper[4786]: I0127 13:12:39.727952 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a860198-23ac-4e13-b770-40b7aa77c0ad-serving-cert\") pod \"controller-manager-78b5d66777-xhvwg\" (UID: \"6a860198-23ac-4e13-b770-40b7aa77c0ad\") " pod="openshift-controller-manager/controller-manager-78b5d66777-xhvwg" Jan 27 13:12:39 crc kubenswrapper[4786]: I0127 13:12:39.745017 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrrzd\" (UniqueName: \"kubernetes.io/projected/6a860198-23ac-4e13-b770-40b7aa77c0ad-kube-api-access-lrrzd\") pod \"controller-manager-78b5d66777-xhvwg\" (UID: \"6a860198-23ac-4e13-b770-40b7aa77c0ad\") " pod="openshift-controller-manager/controller-manager-78b5d66777-xhvwg" Jan 27 
13:12:39 crc kubenswrapper[4786]: I0127 13:12:39.881923 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78b5d66777-xhvwg" Jan 27 13:12:40 crc kubenswrapper[4786]: I0127 13:12:40.093802 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-78b5d66777-xhvwg"] Jan 27 13:12:40 crc kubenswrapper[4786]: I0127 13:12:40.283536 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78b5d66777-xhvwg" event={"ID":"6a860198-23ac-4e13-b770-40b7aa77c0ad","Type":"ContainerStarted","Data":"78368c40febe457fbfa2ccab471b21d1a8f8d2040bedb28fa9dfba941fc08730"} Jan 27 13:12:40 crc kubenswrapper[4786]: I0127 13:12:40.283587 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78b5d66777-xhvwg" event={"ID":"6a860198-23ac-4e13-b770-40b7aa77c0ad","Type":"ContainerStarted","Data":"7d211c0a25b8f01ad2e8bda3ec2fa98655a1bab670a1fe0d55c58a03ef2f5782"} Jan 27 13:12:40 crc kubenswrapper[4786]: I0127 13:12:40.283813 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-78b5d66777-xhvwg" Jan 27 13:12:41 crc kubenswrapper[4786]: I0127 13:12:41.064988 4786 patch_prober.go:28] interesting pod/controller-manager-78b5d66777-xhvwg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" start-of-body= Jan 27 13:12:41 crc kubenswrapper[4786]: I0127 13:12:41.066576 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-78b5d66777-xhvwg" podUID="6a860198-23ac-4e13-b770-40b7aa77c0ad" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 
10.217.0.61:8443: connect: connection refused" Jan 27 13:12:41 crc kubenswrapper[4786]: I0127 13:12:41.091429 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-78b5d66777-xhvwg" podStartSLOduration=3.091405323 podStartE2EDuration="3.091405323s" podCreationTimestamp="2026-01-27 13:12:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:12:41.088229777 +0000 UTC m=+344.298843896" watchObservedRunningTime="2026-01-27 13:12:41.091405323 +0000 UTC m=+344.302019462" Jan 27 13:12:41 crc kubenswrapper[4786]: I0127 13:12:41.291548 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-78b5d66777-xhvwg" Jan 27 13:13:06 crc kubenswrapper[4786]: I0127 13:13:06.052994 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9tfqt"] Jan 27 13:13:06 crc kubenswrapper[4786]: I0127 13:13:06.054362 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-9tfqt" Jan 27 13:13:06 crc kubenswrapper[4786]: I0127 13:13:06.070765 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9tfqt"] Jan 27 13:13:06 crc kubenswrapper[4786]: I0127 13:13:06.249047 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-9tfqt\" (UID: \"1a838142-e43d-4551-957e-fed8eb15ac8f\") " pod="openshift-image-registry/image-registry-66df7c8f76-9tfqt" Jan 27 13:13:06 crc kubenswrapper[4786]: I0127 13:13:06.249424 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1a838142-e43d-4551-957e-fed8eb15ac8f-registry-certificates\") pod \"image-registry-66df7c8f76-9tfqt\" (UID: \"1a838142-e43d-4551-957e-fed8eb15ac8f\") " pod="openshift-image-registry/image-registry-66df7c8f76-9tfqt" Jan 27 13:13:06 crc kubenswrapper[4786]: I0127 13:13:06.249446 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1a838142-e43d-4551-957e-fed8eb15ac8f-registry-tls\") pod \"image-registry-66df7c8f76-9tfqt\" (UID: \"1a838142-e43d-4551-957e-fed8eb15ac8f\") " pod="openshift-image-registry/image-registry-66df7c8f76-9tfqt" Jan 27 13:13:06 crc kubenswrapper[4786]: I0127 13:13:06.249464 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1a838142-e43d-4551-957e-fed8eb15ac8f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9tfqt\" (UID: \"1a838142-e43d-4551-957e-fed8eb15ac8f\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-9tfqt" Jan 27 13:13:06 crc kubenswrapper[4786]: I0127 13:13:06.249520 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1a838142-e43d-4551-957e-fed8eb15ac8f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9tfqt\" (UID: \"1a838142-e43d-4551-957e-fed8eb15ac8f\") " pod="openshift-image-registry/image-registry-66df7c8f76-9tfqt" Jan 27 13:13:06 crc kubenswrapper[4786]: I0127 13:13:06.249740 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tbv2\" (UniqueName: \"kubernetes.io/projected/1a838142-e43d-4551-957e-fed8eb15ac8f-kube-api-access-4tbv2\") pod \"image-registry-66df7c8f76-9tfqt\" (UID: \"1a838142-e43d-4551-957e-fed8eb15ac8f\") " pod="openshift-image-registry/image-registry-66df7c8f76-9tfqt" Jan 27 13:13:06 crc kubenswrapper[4786]: I0127 13:13:06.249866 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1a838142-e43d-4551-957e-fed8eb15ac8f-bound-sa-token\") pod \"image-registry-66df7c8f76-9tfqt\" (UID: \"1a838142-e43d-4551-957e-fed8eb15ac8f\") " pod="openshift-image-registry/image-registry-66df7c8f76-9tfqt" Jan 27 13:13:06 crc kubenswrapper[4786]: I0127 13:13:06.249948 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1a838142-e43d-4551-957e-fed8eb15ac8f-trusted-ca\") pod \"image-registry-66df7c8f76-9tfqt\" (UID: \"1a838142-e43d-4551-957e-fed8eb15ac8f\") " pod="openshift-image-registry/image-registry-66df7c8f76-9tfqt" Jan 27 13:13:06 crc kubenswrapper[4786]: I0127 13:13:06.272318 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-9tfqt\" (UID: \"1a838142-e43d-4551-957e-fed8eb15ac8f\") " pod="openshift-image-registry/image-registry-66df7c8f76-9tfqt"
Jan 27 13:13:06 crc kubenswrapper[4786]: I0127 13:13:06.370248 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1a838142-e43d-4551-957e-fed8eb15ac8f-trusted-ca\") pod \"image-registry-66df7c8f76-9tfqt\" (UID: \"1a838142-e43d-4551-957e-fed8eb15ac8f\") " pod="openshift-image-registry/image-registry-66df7c8f76-9tfqt"
Jan 27 13:13:06 crc kubenswrapper[4786]: I0127 13:13:06.370337 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1a838142-e43d-4551-957e-fed8eb15ac8f-registry-certificates\") pod \"image-registry-66df7c8f76-9tfqt\" (UID: \"1a838142-e43d-4551-957e-fed8eb15ac8f\") " pod="openshift-image-registry/image-registry-66df7c8f76-9tfqt"
Jan 27 13:13:06 crc kubenswrapper[4786]: I0127 13:13:06.370361 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1a838142-e43d-4551-957e-fed8eb15ac8f-registry-tls\") pod \"image-registry-66df7c8f76-9tfqt\" (UID: \"1a838142-e43d-4551-957e-fed8eb15ac8f\") " pod="openshift-image-registry/image-registry-66df7c8f76-9tfqt"
Jan 27 13:13:06 crc kubenswrapper[4786]: I0127 13:13:06.370384 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1a838142-e43d-4551-957e-fed8eb15ac8f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9tfqt\" (UID: \"1a838142-e43d-4551-957e-fed8eb15ac8f\") " pod="openshift-image-registry/image-registry-66df7c8f76-9tfqt"
Jan 27 13:13:06 crc kubenswrapper[4786]: I0127 13:13:06.370416 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1a838142-e43d-4551-957e-fed8eb15ac8f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9tfqt\" (UID: \"1a838142-e43d-4551-957e-fed8eb15ac8f\") " pod="openshift-image-registry/image-registry-66df7c8f76-9tfqt"
Jan 27 13:13:06 crc kubenswrapper[4786]: I0127 13:13:06.370466 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tbv2\" (UniqueName: \"kubernetes.io/projected/1a838142-e43d-4551-957e-fed8eb15ac8f-kube-api-access-4tbv2\") pod \"image-registry-66df7c8f76-9tfqt\" (UID: \"1a838142-e43d-4551-957e-fed8eb15ac8f\") " pod="openshift-image-registry/image-registry-66df7c8f76-9tfqt"
Jan 27 13:13:06 crc kubenswrapper[4786]: I0127 13:13:06.370498 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1a838142-e43d-4551-957e-fed8eb15ac8f-bound-sa-token\") pod \"image-registry-66df7c8f76-9tfqt\" (UID: \"1a838142-e43d-4551-957e-fed8eb15ac8f\") " pod="openshift-image-registry/image-registry-66df7c8f76-9tfqt"
Jan 27 13:13:06 crc kubenswrapper[4786]: I0127 13:13:06.371149 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1a838142-e43d-4551-957e-fed8eb15ac8f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9tfqt\" (UID: \"1a838142-e43d-4551-957e-fed8eb15ac8f\") " pod="openshift-image-registry/image-registry-66df7c8f76-9tfqt"
Jan 27 13:13:06 crc kubenswrapper[4786]: I0127 13:13:06.372417 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1a838142-e43d-4551-957e-fed8eb15ac8f-registry-certificates\") pod \"image-registry-66df7c8f76-9tfqt\" (UID: \"1a838142-e43d-4551-957e-fed8eb15ac8f\") " pod="openshift-image-registry/image-registry-66df7c8f76-9tfqt"
Jan 27 13:13:06 crc kubenswrapper[4786]: I0127 13:13:06.373227 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1a838142-e43d-4551-957e-fed8eb15ac8f-trusted-ca\") pod \"image-registry-66df7c8f76-9tfqt\" (UID: \"1a838142-e43d-4551-957e-fed8eb15ac8f\") " pod="openshift-image-registry/image-registry-66df7c8f76-9tfqt"
Jan 27 13:13:06 crc kubenswrapper[4786]: I0127 13:13:06.377133 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1a838142-e43d-4551-957e-fed8eb15ac8f-registry-tls\") pod \"image-registry-66df7c8f76-9tfqt\" (UID: \"1a838142-e43d-4551-957e-fed8eb15ac8f\") " pod="openshift-image-registry/image-registry-66df7c8f76-9tfqt"
Jan 27 13:13:06 crc kubenswrapper[4786]: I0127 13:13:06.382745 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1a838142-e43d-4551-957e-fed8eb15ac8f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9tfqt\" (UID: \"1a838142-e43d-4551-957e-fed8eb15ac8f\") " pod="openshift-image-registry/image-registry-66df7c8f76-9tfqt"
Jan 27 13:13:06 crc kubenswrapper[4786]: I0127 13:13:06.389509 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1a838142-e43d-4551-957e-fed8eb15ac8f-bound-sa-token\") pod \"image-registry-66df7c8f76-9tfqt\" (UID: \"1a838142-e43d-4551-957e-fed8eb15ac8f\") " pod="openshift-image-registry/image-registry-66df7c8f76-9tfqt"
Jan 27 13:13:06 crc kubenswrapper[4786]: I0127 13:13:06.402244 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tbv2\" (UniqueName: \"kubernetes.io/projected/1a838142-e43d-4551-957e-fed8eb15ac8f-kube-api-access-4tbv2\") pod \"image-registry-66df7c8f76-9tfqt\" (UID: \"1a838142-e43d-4551-957e-fed8eb15ac8f\") " pod="openshift-image-registry/image-registry-66df7c8f76-9tfqt"
Jan 27 13:13:06 crc kubenswrapper[4786]: I0127 13:13:06.670963 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-9tfqt"
Jan 27 13:13:07 crc kubenswrapper[4786]: I0127 13:13:07.143215 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9tfqt"]
Jan 27 13:13:07 crc kubenswrapper[4786]: I0127 13:13:07.441140 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-9tfqt" event={"ID":"1a838142-e43d-4551-957e-fed8eb15ac8f","Type":"ContainerStarted","Data":"7d1245bcf2764a560cba8bddb29423c3cf3bbfa7a009cb840a936fe91eedf79f"}
Jan 27 13:13:07 crc kubenswrapper[4786]: I0127 13:13:07.441190 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-9tfqt" event={"ID":"1a838142-e43d-4551-957e-fed8eb15ac8f","Type":"ContainerStarted","Data":"fbfe9f4d1f2c87ada8d8f44a544385c3b1e085b90e0c308d46bb1980c0c539fa"}
Jan 27 13:13:07 crc kubenswrapper[4786]: I0127 13:13:07.441325 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-9tfqt"
Jan 27 13:13:07 crc kubenswrapper[4786]: I0127 13:13:07.462478 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-9tfqt" podStartSLOduration=1.462462121 podStartE2EDuration="1.462462121s" podCreationTimestamp="2026-01-27 13:13:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:13:07.461457524 +0000 UTC m=+370.672071643" watchObservedRunningTime="2026-01-27 13:13:07.462462121 +0000 UTC m=+370.673076240"
Jan 27 13:13:09 crc kubenswrapper[4786]: I0127 13:13:09.533290 4786 patch_prober.go:28] interesting pod/machine-config-daemon-7bxtk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 13:13:09 crc kubenswrapper[4786]: I0127 13:13:09.533815 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.211330 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9lbn2"]
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.212034 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9lbn2" podUID="dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a" containerName="registry-server" containerID="cri-o://ecf32e051bedb06636bd8c57fe31bbf5d3ec99728b0cef8b46980ce1fcd29403" gracePeriod=30
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.226760 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sk9mj"]
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.227120 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sk9mj" podUID="f61b3f86-bdc9-44a2-a5a8-d9895393ddaa" containerName="registry-server" containerID="cri-o://eebd3bb3424aaf829248a618dbcb886dd9549cb2abbc375300bfefa682267c6c" gracePeriod=30
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.239656 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ndd4n"]
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.239919 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-ndd4n" podUID="294ae02f-293d-4db8-9a9f-6d3878c8ccf9" containerName="marketplace-operator" containerID="cri-o://ab14ac4bd6bddefd8577218400d2fc3c74e23314875c9c8a5b0b4b9a1d9120cd" gracePeriod=30
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.253960 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2js8d"]
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.254251 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2js8d" podUID="9332175b-3747-40d6-892d-1c126a05b0c2" containerName="registry-server" containerID="cri-o://7fd1bc78e82ecbc935e08459b986530b5a3b8b93bc5557e0bc9b754a688602f6" gracePeriod=30
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.270207 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f44tb"]
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.270872 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f44tb" podUID="35ceca1f-028f-4e16-8c4c-1fe9094598c8" containerName="registry-server" containerID="cri-o://d6198f8824d4ddac19879c411442b0b65e682b3d3b470c70bc136a9e1a7b0a94" gracePeriod=30
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.275686 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-245sv"]
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.276846 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-245sv"
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.279382 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-245sv"]
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.398235 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hd6c\" (UniqueName: \"kubernetes.io/projected/d9ebf2ac-8724-4914-be74-cae8c48760d8-kube-api-access-7hd6c\") pod \"marketplace-operator-79b997595-245sv\" (UID: \"d9ebf2ac-8724-4914-be74-cae8c48760d8\") " pod="openshift-marketplace/marketplace-operator-79b997595-245sv"
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.398305 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d9ebf2ac-8724-4914-be74-cae8c48760d8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-245sv\" (UID: \"d9ebf2ac-8724-4914-be74-cae8c48760d8\") " pod="openshift-marketplace/marketplace-operator-79b997595-245sv"
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.398341 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d9ebf2ac-8724-4914-be74-cae8c48760d8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-245sv\" (UID: \"d9ebf2ac-8724-4914-be74-cae8c48760d8\") " pod="openshift-marketplace/marketplace-operator-79b997595-245sv"
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.486196 4786 generic.go:334] "Generic (PLEG): container finished" podID="f61b3f86-bdc9-44a2-a5a8-d9895393ddaa" containerID="eebd3bb3424aaf829248a618dbcb886dd9549cb2abbc375300bfefa682267c6c" exitCode=0
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.486264 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sk9mj" event={"ID":"f61b3f86-bdc9-44a2-a5a8-d9895393ddaa","Type":"ContainerDied","Data":"eebd3bb3424aaf829248a618dbcb886dd9549cb2abbc375300bfefa682267c6c"}
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.489175 4786 generic.go:334] "Generic (PLEG): container finished" podID="9332175b-3747-40d6-892d-1c126a05b0c2" containerID="7fd1bc78e82ecbc935e08459b986530b5a3b8b93bc5557e0bc9b754a688602f6" exitCode=0
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.489308 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2js8d" event={"ID":"9332175b-3747-40d6-892d-1c126a05b0c2","Type":"ContainerDied","Data":"7fd1bc78e82ecbc935e08459b986530b5a3b8b93bc5557e0bc9b754a688602f6"}
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.492306 4786 generic.go:334] "Generic (PLEG): container finished" podID="35ceca1f-028f-4e16-8c4c-1fe9094598c8" containerID="d6198f8824d4ddac19879c411442b0b65e682b3d3b470c70bc136a9e1a7b0a94" exitCode=0
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.492572 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f44tb" event={"ID":"35ceca1f-028f-4e16-8c4c-1fe9094598c8","Type":"ContainerDied","Data":"d6198f8824d4ddac19879c411442b0b65e682b3d3b470c70bc136a9e1a7b0a94"}
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.494204 4786 generic.go:334] "Generic (PLEG): container finished" podID="294ae02f-293d-4db8-9a9f-6d3878c8ccf9" containerID="ab14ac4bd6bddefd8577218400d2fc3c74e23314875c9c8a5b0b4b9a1d9120cd" exitCode=0
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.494299 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ndd4n" event={"ID":"294ae02f-293d-4db8-9a9f-6d3878c8ccf9","Type":"ContainerDied","Data":"ab14ac4bd6bddefd8577218400d2fc3c74e23314875c9c8a5b0b4b9a1d9120cd"}
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.494351 4786 scope.go:117] "RemoveContainer" containerID="134f9aafba7557b0fffd00fb582a5deae1327822d1ae4b6a2899a49c060abdd3"
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.496747 4786 generic.go:334] "Generic (PLEG): container finished" podID="dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a" containerID="ecf32e051bedb06636bd8c57fe31bbf5d3ec99728b0cef8b46980ce1fcd29403" exitCode=0
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.496809 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9lbn2" event={"ID":"dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a","Type":"ContainerDied","Data":"ecf32e051bedb06636bd8c57fe31bbf5d3ec99728b0cef8b46980ce1fcd29403"}
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.500073 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hd6c\" (UniqueName: \"kubernetes.io/projected/d9ebf2ac-8724-4914-be74-cae8c48760d8-kube-api-access-7hd6c\") pod \"marketplace-operator-79b997595-245sv\" (UID: \"d9ebf2ac-8724-4914-be74-cae8c48760d8\") " pod="openshift-marketplace/marketplace-operator-79b997595-245sv"
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.500146 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d9ebf2ac-8724-4914-be74-cae8c48760d8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-245sv\" (UID: \"d9ebf2ac-8724-4914-be74-cae8c48760d8\") " pod="openshift-marketplace/marketplace-operator-79b997595-245sv"
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.500211 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d9ebf2ac-8724-4914-be74-cae8c48760d8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-245sv\" (UID: \"d9ebf2ac-8724-4914-be74-cae8c48760d8\") " pod="openshift-marketplace/marketplace-operator-79b997595-245sv"
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.502554 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d9ebf2ac-8724-4914-be74-cae8c48760d8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-245sv\" (UID: \"d9ebf2ac-8724-4914-be74-cae8c48760d8\") " pod="openshift-marketplace/marketplace-operator-79b997595-245sv"
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.510887 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d9ebf2ac-8724-4914-be74-cae8c48760d8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-245sv\" (UID: \"d9ebf2ac-8724-4914-be74-cae8c48760d8\") " pod="openshift-marketplace/marketplace-operator-79b997595-245sv"
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.516238 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hd6c\" (UniqueName: \"kubernetes.io/projected/d9ebf2ac-8724-4914-be74-cae8c48760d8-kube-api-access-7hd6c\") pod \"marketplace-operator-79b997595-245sv\" (UID: \"d9ebf2ac-8724-4914-be74-cae8c48760d8\") " pod="openshift-marketplace/marketplace-operator-79b997595-245sv"
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.735577 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-245sv"
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.742641 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sk9mj"
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.858542 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f44tb"
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.903971 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f61b3f86-bdc9-44a2-a5a8-d9895393ddaa-catalog-content\") pod \"f61b3f86-bdc9-44a2-a5a8-d9895393ddaa\" (UID: \"f61b3f86-bdc9-44a2-a5a8-d9895393ddaa\") "
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.904072 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f61b3f86-bdc9-44a2-a5a8-d9895393ddaa-utilities\") pod \"f61b3f86-bdc9-44a2-a5a8-d9895393ddaa\" (UID: \"f61b3f86-bdc9-44a2-a5a8-d9895393ddaa\") "
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.905457 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f61b3f86-bdc9-44a2-a5a8-d9895393ddaa-utilities" (OuterVolumeSpecName: "utilities") pod "f61b3f86-bdc9-44a2-a5a8-d9895393ddaa" (UID: "f61b3f86-bdc9-44a2-a5a8-d9895393ddaa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.906089 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tq9s\" (UniqueName: \"kubernetes.io/projected/f61b3f86-bdc9-44a2-a5a8-d9895393ddaa-kube-api-access-9tq9s\") pod \"f61b3f86-bdc9-44a2-a5a8-d9895393ddaa\" (UID: \"f61b3f86-bdc9-44a2-a5a8-d9895393ddaa\") "
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.906440 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f61b3f86-bdc9-44a2-a5a8-d9895393ddaa-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.906901 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ndd4n"
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.908513 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2js8d"
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.913101 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f61b3f86-bdc9-44a2-a5a8-d9895393ddaa-kube-api-access-9tq9s" (OuterVolumeSpecName: "kube-api-access-9tq9s") pod "f61b3f86-bdc9-44a2-a5a8-d9895393ddaa" (UID: "f61b3f86-bdc9-44a2-a5a8-d9895393ddaa"). InnerVolumeSpecName "kube-api-access-9tq9s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.925849 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9lbn2"
Jan 27 13:13:15 crc kubenswrapper[4786]: I0127 13:13:15.968150 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f61b3f86-bdc9-44a2-a5a8-d9895393ddaa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f61b3f86-bdc9-44a2-a5a8-d9895393ddaa" (UID: "f61b3f86-bdc9-44a2-a5a8-d9895393ddaa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.007001 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrz4x\" (UniqueName: \"kubernetes.io/projected/9332175b-3747-40d6-892d-1c126a05b0c2-kube-api-access-mrz4x\") pod \"9332175b-3747-40d6-892d-1c126a05b0c2\" (UID: \"9332175b-3747-40d6-892d-1c126a05b0c2\") "
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.007046 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/294ae02f-293d-4db8-9a9f-6d3878c8ccf9-marketplace-operator-metrics\") pod \"294ae02f-293d-4db8-9a9f-6d3878c8ccf9\" (UID: \"294ae02f-293d-4db8-9a9f-6d3878c8ccf9\") "
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.007079 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35ceca1f-028f-4e16-8c4c-1fe9094598c8-utilities\") pod \"35ceca1f-028f-4e16-8c4c-1fe9094598c8\" (UID: \"35ceca1f-028f-4e16-8c4c-1fe9094598c8\") "
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.007128 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9332175b-3747-40d6-892d-1c126a05b0c2-utilities\") pod \"9332175b-3747-40d6-892d-1c126a05b0c2\" (UID: \"9332175b-3747-40d6-892d-1c126a05b0c2\") "
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.007153 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrcx9\" (UniqueName: \"kubernetes.io/projected/35ceca1f-028f-4e16-8c4c-1fe9094598c8-kube-api-access-qrcx9\") pod \"35ceca1f-028f-4e16-8c4c-1fe9094598c8\" (UID: \"35ceca1f-028f-4e16-8c4c-1fe9094598c8\") "
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.007174 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a-catalog-content\") pod \"dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a\" (UID: \"dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a\") "
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.007188 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35ceca1f-028f-4e16-8c4c-1fe9094598c8-catalog-content\") pod \"35ceca1f-028f-4e16-8c4c-1fe9094598c8\" (UID: \"35ceca1f-028f-4e16-8c4c-1fe9094598c8\") "
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.007206 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/294ae02f-293d-4db8-9a9f-6d3878c8ccf9-marketplace-trusted-ca\") pod \"294ae02f-293d-4db8-9a9f-6d3878c8ccf9\" (UID: \"294ae02f-293d-4db8-9a9f-6d3878c8ccf9\") "
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.007234 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a-utilities\") pod \"dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a\" (UID: \"dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a\") "
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.007261 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rk4bz\" (UniqueName: \"kubernetes.io/projected/294ae02f-293d-4db8-9a9f-6d3878c8ccf9-kube-api-access-rk4bz\") pod \"294ae02f-293d-4db8-9a9f-6d3878c8ccf9\" (UID: \"294ae02f-293d-4db8-9a9f-6d3878c8ccf9\") "
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.007281 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pc465\" (UniqueName: \"kubernetes.io/projected/dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a-kube-api-access-pc465\") pod \"dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a\" (UID: \"dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a\") "
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.007328 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9332175b-3747-40d6-892d-1c126a05b0c2-catalog-content\") pod \"9332175b-3747-40d6-892d-1c126a05b0c2\" (UID: \"9332175b-3747-40d6-892d-1c126a05b0c2\") "
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.007550 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tq9s\" (UniqueName: \"kubernetes.io/projected/f61b3f86-bdc9-44a2-a5a8-d9895393ddaa-kube-api-access-9tq9s\") on node \"crc\" DevicePath \"\""
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.007562 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f61b3f86-bdc9-44a2-a5a8-d9895393ddaa-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.007819 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35ceca1f-028f-4e16-8c4c-1fe9094598c8-utilities" (OuterVolumeSpecName: "utilities") pod "35ceca1f-028f-4e16-8c4c-1fe9094598c8" (UID: "35ceca1f-028f-4e16-8c4c-1fe9094598c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.008409 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a-utilities" (OuterVolumeSpecName: "utilities") pod "dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a" (UID: "dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.008708 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/294ae02f-293d-4db8-9a9f-6d3878c8ccf9-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "294ae02f-293d-4db8-9a9f-6d3878c8ccf9" (UID: "294ae02f-293d-4db8-9a9f-6d3878c8ccf9"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.009049 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9332175b-3747-40d6-892d-1c126a05b0c2-utilities" (OuterVolumeSpecName: "utilities") pod "9332175b-3747-40d6-892d-1c126a05b0c2" (UID: "9332175b-3747-40d6-892d-1c126a05b0c2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.011175 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35ceca1f-028f-4e16-8c4c-1fe9094598c8-kube-api-access-qrcx9" (OuterVolumeSpecName: "kube-api-access-qrcx9") pod "35ceca1f-028f-4e16-8c4c-1fe9094598c8" (UID: "35ceca1f-028f-4e16-8c4c-1fe9094598c8"). InnerVolumeSpecName "kube-api-access-qrcx9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.011429 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/294ae02f-293d-4db8-9a9f-6d3878c8ccf9-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "294ae02f-293d-4db8-9a9f-6d3878c8ccf9" (UID: "294ae02f-293d-4db8-9a9f-6d3878c8ccf9"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.011575 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/294ae02f-293d-4db8-9a9f-6d3878c8ccf9-kube-api-access-rk4bz" (OuterVolumeSpecName: "kube-api-access-rk4bz") pod "294ae02f-293d-4db8-9a9f-6d3878c8ccf9" (UID: "294ae02f-293d-4db8-9a9f-6d3878c8ccf9"). InnerVolumeSpecName "kube-api-access-rk4bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.012116 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a-kube-api-access-pc465" (OuterVolumeSpecName: "kube-api-access-pc465") pod "dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a" (UID: "dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a"). InnerVolumeSpecName "kube-api-access-pc465". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.021716 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9332175b-3747-40d6-892d-1c126a05b0c2-kube-api-access-mrz4x" (OuterVolumeSpecName: "kube-api-access-mrz4x") pod "9332175b-3747-40d6-892d-1c126a05b0c2" (UID: "9332175b-3747-40d6-892d-1c126a05b0c2"). InnerVolumeSpecName "kube-api-access-mrz4x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.031411 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9332175b-3747-40d6-892d-1c126a05b0c2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9332175b-3747-40d6-892d-1c126a05b0c2" (UID: "9332175b-3747-40d6-892d-1c126a05b0c2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.057816 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a" (UID: "dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.108456 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9332175b-3747-40d6-892d-1c126a05b0c2-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.108487 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrcx9\" (UniqueName: \"kubernetes.io/projected/35ceca1f-028f-4e16-8c4c-1fe9094598c8-kube-api-access-qrcx9\") on node \"crc\" DevicePath \"\""
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.108497 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.108508 4786 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/294ae02f-293d-4db8-9a9f-6d3878c8ccf9-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.108518 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.108526 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rk4bz\" (UniqueName: \"kubernetes.io/projected/294ae02f-293d-4db8-9a9f-6d3878c8ccf9-kube-api-access-rk4bz\") on node \"crc\" DevicePath \"\""
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.108534 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pc465\" (UniqueName: \"kubernetes.io/projected/dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a-kube-api-access-pc465\") on node \"crc\" DevicePath \"\""
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.108542 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9332175b-3747-40d6-892d-1c126a05b0c2-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.108552 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrz4x\" (UniqueName: \"kubernetes.io/projected/9332175b-3747-40d6-892d-1c126a05b0c2-kube-api-access-mrz4x\") on node \"crc\" DevicePath \"\""
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.108560 4786 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/294ae02f-293d-4db8-9a9f-6d3878c8ccf9-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.108568 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35ceca1f-028f-4e16-8c4c-1fe9094598c8-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.127035 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35ceca1f-028f-4e16-8c4c-1fe9094598c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "35ceca1f-028f-4e16-8c4c-1fe9094598c8" (UID: "35ceca1f-028f-4e16-8c4c-1fe9094598c8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.200585 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-245sv"]
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.209319 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35ceca1f-028f-4e16-8c4c-1fe9094598c8-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.506122 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f44tb" event={"ID":"35ceca1f-028f-4e16-8c4c-1fe9094598c8","Type":"ContainerDied","Data":"e7dc1f88ce9f17010641ca2cd3eed47cfe0904fce64393209c8e5dba3178699a"}
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.506183 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f44tb"
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.506233 4786 scope.go:117] "RemoveContainer" containerID="d6198f8824d4ddac19879c411442b0b65e682b3d3b470c70bc136a9e1a7b0a94"
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.508015 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-245sv" event={"ID":"d9ebf2ac-8724-4914-be74-cae8c48760d8","Type":"ContainerStarted","Data":"5f2f4985dfcf9c6e3c592f221f7b10edbb7e2a0080c58732665688e006f9bfc3"}
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.508066 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-245sv" event={"ID":"d9ebf2ac-8724-4914-be74-cae8c48760d8","Type":"ContainerStarted","Data":"ac14e20b3a110e9b596e70a776e36f369e1b6c561ccb1891aa3a7b6204765df2"}
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.508085 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-245sv"
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.510477 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ndd4n"
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.510498 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ndd4n" event={"ID":"294ae02f-293d-4db8-9a9f-6d3878c8ccf9","Type":"ContainerDied","Data":"ca9f9eda2f6fdfd30b84eec83c329291324add7c7587bc058f038500a3373ece"}
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.511003 4786 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-245sv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.63:8080/healthz\": dial tcp 10.217.0.63:8080: connect: connection refused" start-of-body=
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.511079 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-245sv" podUID="d9ebf2ac-8724-4914-be74-cae8c48760d8" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.63:8080/healthz\": dial tcp 10.217.0.63:8080: connect: connection refused"
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.512727 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9lbn2" event={"ID":"dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a","Type":"ContainerDied","Data":"19bea142f35f5555677c0091beec7c95a65816312437eb41ef285ad33e458194"}
Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.513104 4786 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-9lbn2" Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.518636 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sk9mj" Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.518580 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sk9mj" event={"ID":"f61b3f86-bdc9-44a2-a5a8-d9895393ddaa","Type":"ContainerDied","Data":"33a3c9c01c9026134810c43756681d8fc8b192b15c7ffc08be083a5a65bd792d"} Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.531230 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2js8d" event={"ID":"9332175b-3747-40d6-892d-1c126a05b0c2","Type":"ContainerDied","Data":"c4e42739448583e4dd78bc479edce44169eaa9951d2dc5323d3e049348ed8dba"} Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.531330 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2js8d" Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.531382 4786 scope.go:117] "RemoveContainer" containerID="b25a995651f029347f819bea1a078477c610dc977243d7722d8dda6127b47e6e" Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.537151 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-245sv" podStartSLOduration=1.537133151 podStartE2EDuration="1.537133151s" podCreationTimestamp="2026-01-27 13:13:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:13:16.532478326 +0000 UTC m=+379.743092465" watchObservedRunningTime="2026-01-27 13:13:16.537133151 +0000 UTC m=+379.747747270" Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.580801 4786 scope.go:117] "RemoveContainer" containerID="3efabdb6d8a89f5532693df7f7f81e9ac055dcedabca2538e148a8652e1b7e93" Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.583230 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ndd4n"] Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.587271 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ndd4n"] Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.599267 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9lbn2"] Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.609766 4786 scope.go:117] "RemoveContainer" containerID="ab14ac4bd6bddefd8577218400d2fc3c74e23314875c9c8a5b0b4b9a1d9120cd" Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.613379 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9lbn2"] Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.618575 4786 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f44tb"] Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.622283 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f44tb"] Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.628635 4786 scope.go:117] "RemoveContainer" containerID="ecf32e051bedb06636bd8c57fe31bbf5d3ec99728b0cef8b46980ce1fcd29403" Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.631396 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sk9mj"] Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.636733 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sk9mj"] Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.641109 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2js8d"] Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.643530 4786 scope.go:117] "RemoveContainer" containerID="44b0930588b4c5f24efdef597b715b9111272f27d04936b8baebf07702619f6a" Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.645319 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2js8d"] Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.659470 4786 scope.go:117] "RemoveContainer" containerID="e35dea6d7c1a4193324a1a86efe987d57fdf93aa7f50bfdfbdf8cf688eac24d5" Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.681326 4786 scope.go:117] "RemoveContainer" containerID="eebd3bb3424aaf829248a618dbcb886dd9549cb2abbc375300bfefa682267c6c" Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.699977 4786 scope.go:117] "RemoveContainer" containerID="e5ebd0e5ac5b052dd5b59ff66941652aaa109092a71debbf25c6ed38d12e1a6d" Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.714923 4786 scope.go:117] "RemoveContainer" 
containerID="09033df3b85ae61565835703d1efae15f3d622d48cbcf9b3d5f1c5f6e0d75b1c" Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.726569 4786 scope.go:117] "RemoveContainer" containerID="7fd1bc78e82ecbc935e08459b986530b5a3b8b93bc5557e0bc9b754a688602f6" Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.742588 4786 scope.go:117] "RemoveContainer" containerID="fadecf1a5188f682562d22905957a6c92bbd8ae7f9b01574da3878481103870d" Jan 27 13:13:16 crc kubenswrapper[4786]: I0127 13:13:16.757325 4786 scope.go:117] "RemoveContainer" containerID="a79bad1016e1610bf76fe702871c6688f8ded8472ae11401cb0c9654938f4c29" Jan 27 13:13:17 crc kubenswrapper[4786]: I0127 13:13:17.471494 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="294ae02f-293d-4db8-9a9f-6d3878c8ccf9" path="/var/lib/kubelet/pods/294ae02f-293d-4db8-9a9f-6d3878c8ccf9/volumes" Jan 27 13:13:17 crc kubenswrapper[4786]: I0127 13:13:17.472033 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35ceca1f-028f-4e16-8c4c-1fe9094598c8" path="/var/lib/kubelet/pods/35ceca1f-028f-4e16-8c4c-1fe9094598c8/volumes" Jan 27 13:13:17 crc kubenswrapper[4786]: I0127 13:13:17.472723 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9332175b-3747-40d6-892d-1c126a05b0c2" path="/var/lib/kubelet/pods/9332175b-3747-40d6-892d-1c126a05b0c2/volumes" Jan 27 13:13:17 crc kubenswrapper[4786]: I0127 13:13:17.473346 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a" path="/var/lib/kubelet/pods/dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a/volumes" Jan 27 13:13:17 crc kubenswrapper[4786]: I0127 13:13:17.473986 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f61b3f86-bdc9-44a2-a5a8-d9895393ddaa" path="/var/lib/kubelet/pods/f61b3f86-bdc9-44a2-a5a8-d9895393ddaa/volumes" Jan 27 13:13:17 crc kubenswrapper[4786]: I0127 13:13:17.543166 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-245sv" Jan 27 13:13:17 crc kubenswrapper[4786]: I0127 13:13:17.831454 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xvj57"] Jan 27 13:13:17 crc kubenswrapper[4786]: E0127 13:13:17.831650 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9332175b-3747-40d6-892d-1c126a05b0c2" containerName="extract-content" Jan 27 13:13:17 crc kubenswrapper[4786]: I0127 13:13:17.831662 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="9332175b-3747-40d6-892d-1c126a05b0c2" containerName="extract-content" Jan 27 13:13:17 crc kubenswrapper[4786]: E0127 13:13:17.831670 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a" containerName="extract-content" Jan 27 13:13:17 crc kubenswrapper[4786]: I0127 13:13:17.831676 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a" containerName="extract-content" Jan 27 13:13:17 crc kubenswrapper[4786]: E0127 13:13:17.831683 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35ceca1f-028f-4e16-8c4c-1fe9094598c8" containerName="extract-utilities" Jan 27 13:13:17 crc kubenswrapper[4786]: I0127 13:13:17.831689 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="35ceca1f-028f-4e16-8c4c-1fe9094598c8" containerName="extract-utilities" Jan 27 13:13:17 crc kubenswrapper[4786]: E0127 13:13:17.831696 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35ceca1f-028f-4e16-8c4c-1fe9094598c8" containerName="extract-content" Jan 27 13:13:17 crc kubenswrapper[4786]: I0127 13:13:17.831702 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="35ceca1f-028f-4e16-8c4c-1fe9094598c8" containerName="extract-content" Jan 27 13:13:17 crc kubenswrapper[4786]: E0127 13:13:17.831713 4786 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9332175b-3747-40d6-892d-1c126a05b0c2" containerName="registry-server" Jan 27 13:13:17 crc kubenswrapper[4786]: I0127 13:13:17.831719 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="9332175b-3747-40d6-892d-1c126a05b0c2" containerName="registry-server" Jan 27 13:13:17 crc kubenswrapper[4786]: E0127 13:13:17.831729 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="294ae02f-293d-4db8-9a9f-6d3878c8ccf9" containerName="marketplace-operator" Jan 27 13:13:17 crc kubenswrapper[4786]: I0127 13:13:17.831735 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="294ae02f-293d-4db8-9a9f-6d3878c8ccf9" containerName="marketplace-operator" Jan 27 13:13:17 crc kubenswrapper[4786]: E0127 13:13:17.831741 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35ceca1f-028f-4e16-8c4c-1fe9094598c8" containerName="registry-server" Jan 27 13:13:17 crc kubenswrapper[4786]: I0127 13:13:17.831746 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="35ceca1f-028f-4e16-8c4c-1fe9094598c8" containerName="registry-server" Jan 27 13:13:17 crc kubenswrapper[4786]: E0127 13:13:17.831752 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f61b3f86-bdc9-44a2-a5a8-d9895393ddaa" containerName="registry-server" Jan 27 13:13:17 crc kubenswrapper[4786]: I0127 13:13:17.831758 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f61b3f86-bdc9-44a2-a5a8-d9895393ddaa" containerName="registry-server" Jan 27 13:13:17 crc kubenswrapper[4786]: E0127 13:13:17.831765 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a" containerName="extract-utilities" Jan 27 13:13:17 crc kubenswrapper[4786]: I0127 13:13:17.831770 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a" containerName="extract-utilities" Jan 27 13:13:17 crc kubenswrapper[4786]: E0127 13:13:17.831776 4786 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f61b3f86-bdc9-44a2-a5a8-d9895393ddaa" containerName="extract-utilities" Jan 27 13:13:17 crc kubenswrapper[4786]: I0127 13:13:17.831781 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f61b3f86-bdc9-44a2-a5a8-d9895393ddaa" containerName="extract-utilities" Jan 27 13:13:17 crc kubenswrapper[4786]: E0127 13:13:17.831789 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a" containerName="registry-server" Jan 27 13:13:17 crc kubenswrapper[4786]: I0127 13:13:17.831794 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a" containerName="registry-server" Jan 27 13:13:17 crc kubenswrapper[4786]: E0127 13:13:17.831802 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9332175b-3747-40d6-892d-1c126a05b0c2" containerName="extract-utilities" Jan 27 13:13:17 crc kubenswrapper[4786]: I0127 13:13:17.831808 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="9332175b-3747-40d6-892d-1c126a05b0c2" containerName="extract-utilities" Jan 27 13:13:17 crc kubenswrapper[4786]: E0127 13:13:17.831816 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f61b3f86-bdc9-44a2-a5a8-d9895393ddaa" containerName="extract-content" Jan 27 13:13:17 crc kubenswrapper[4786]: I0127 13:13:17.831821 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f61b3f86-bdc9-44a2-a5a8-d9895393ddaa" containerName="extract-content" Jan 27 13:13:17 crc kubenswrapper[4786]: I0127 13:13:17.831902 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="f61b3f86-bdc9-44a2-a5a8-d9895393ddaa" containerName="registry-server" Jan 27 13:13:17 crc kubenswrapper[4786]: I0127 13:13:17.831912 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="9332175b-3747-40d6-892d-1c126a05b0c2" containerName="registry-server" Jan 27 13:13:17 crc kubenswrapper[4786]: I0127 13:13:17.831919 4786 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="294ae02f-293d-4db8-9a9f-6d3878c8ccf9" containerName="marketplace-operator" Jan 27 13:13:17 crc kubenswrapper[4786]: I0127 13:13:17.831928 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="294ae02f-293d-4db8-9a9f-6d3878c8ccf9" containerName="marketplace-operator" Jan 27 13:13:17 crc kubenswrapper[4786]: I0127 13:13:17.831936 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfb0b452-d5ee-4c4d-9cc6-d0b962cba56a" containerName="registry-server" Jan 27 13:13:17 crc kubenswrapper[4786]: I0127 13:13:17.831944 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="35ceca1f-028f-4e16-8c4c-1fe9094598c8" containerName="registry-server" Jan 27 13:13:17 crc kubenswrapper[4786]: E0127 13:13:17.832029 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="294ae02f-293d-4db8-9a9f-6d3878c8ccf9" containerName="marketplace-operator" Jan 27 13:13:17 crc kubenswrapper[4786]: I0127 13:13:17.832036 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="294ae02f-293d-4db8-9a9f-6d3878c8ccf9" containerName="marketplace-operator" Jan 27 13:13:17 crc kubenswrapper[4786]: I0127 13:13:17.832806 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xvj57" Jan 27 13:13:17 crc kubenswrapper[4786]: I0127 13:13:17.835655 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 13:13:17 crc kubenswrapper[4786]: I0127 13:13:17.843993 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xvj57"] Jan 27 13:13:17 crc kubenswrapper[4786]: I0127 13:13:17.936934 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1abdef6-52c6-4c82-b750-a46910bbb108-catalog-content\") pod \"redhat-operators-xvj57\" (UID: \"c1abdef6-52c6-4c82-b750-a46910bbb108\") " pod="openshift-marketplace/redhat-operators-xvj57" Jan 27 13:13:17 crc kubenswrapper[4786]: I0127 13:13:17.937001 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhm45\" (UniqueName: \"kubernetes.io/projected/c1abdef6-52c6-4c82-b750-a46910bbb108-kube-api-access-lhm45\") pod \"redhat-operators-xvj57\" (UID: \"c1abdef6-52c6-4c82-b750-a46910bbb108\") " pod="openshift-marketplace/redhat-operators-xvj57" Jan 27 13:13:17 crc kubenswrapper[4786]: I0127 13:13:17.937100 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1abdef6-52c6-4c82-b750-a46910bbb108-utilities\") pod \"redhat-operators-xvj57\" (UID: \"c1abdef6-52c6-4c82-b750-a46910bbb108\") " pod="openshift-marketplace/redhat-operators-xvj57" Jan 27 13:13:18 crc kubenswrapper[4786]: I0127 13:13:18.038380 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1abdef6-52c6-4c82-b750-a46910bbb108-utilities\") pod \"redhat-operators-xvj57\" (UID: \"c1abdef6-52c6-4c82-b750-a46910bbb108\") " 
pod="openshift-marketplace/redhat-operators-xvj57" Jan 27 13:13:18 crc kubenswrapper[4786]: I0127 13:13:18.038468 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1abdef6-52c6-4c82-b750-a46910bbb108-catalog-content\") pod \"redhat-operators-xvj57\" (UID: \"c1abdef6-52c6-4c82-b750-a46910bbb108\") " pod="openshift-marketplace/redhat-operators-xvj57" Jan 27 13:13:18 crc kubenswrapper[4786]: I0127 13:13:18.038516 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhm45\" (UniqueName: \"kubernetes.io/projected/c1abdef6-52c6-4c82-b750-a46910bbb108-kube-api-access-lhm45\") pod \"redhat-operators-xvj57\" (UID: \"c1abdef6-52c6-4c82-b750-a46910bbb108\") " pod="openshift-marketplace/redhat-operators-xvj57" Jan 27 13:13:18 crc kubenswrapper[4786]: I0127 13:13:18.038912 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1abdef6-52c6-4c82-b750-a46910bbb108-utilities\") pod \"redhat-operators-xvj57\" (UID: \"c1abdef6-52c6-4c82-b750-a46910bbb108\") " pod="openshift-marketplace/redhat-operators-xvj57" Jan 27 13:13:18 crc kubenswrapper[4786]: I0127 13:13:18.040742 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1abdef6-52c6-4c82-b750-a46910bbb108-catalog-content\") pod \"redhat-operators-xvj57\" (UID: \"c1abdef6-52c6-4c82-b750-a46910bbb108\") " pod="openshift-marketplace/redhat-operators-xvj57" Jan 27 13:13:18 crc kubenswrapper[4786]: I0127 13:13:18.058487 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhm45\" (UniqueName: \"kubernetes.io/projected/c1abdef6-52c6-4c82-b750-a46910bbb108-kube-api-access-lhm45\") pod \"redhat-operators-xvj57\" (UID: \"c1abdef6-52c6-4c82-b750-a46910bbb108\") " pod="openshift-marketplace/redhat-operators-xvj57" Jan 
27 13:13:18 crc kubenswrapper[4786]: I0127 13:13:18.156743 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xvj57" Jan 27 13:13:18 crc kubenswrapper[4786]: I0127 13:13:18.554542 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xvj57"] Jan 27 13:13:18 crc kubenswrapper[4786]: I0127 13:13:18.827066 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mn8gs"] Jan 27 13:13:18 crc kubenswrapper[4786]: I0127 13:13:18.828730 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mn8gs" Jan 27 13:13:18 crc kubenswrapper[4786]: I0127 13:13:18.831788 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 13:13:18 crc kubenswrapper[4786]: I0127 13:13:18.836747 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mn8gs"] Jan 27 13:13:18 crc kubenswrapper[4786]: I0127 13:13:18.949956 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkxvm\" (UniqueName: \"kubernetes.io/projected/641b2493-f39b-4215-b189-73893b0e03a4-kube-api-access-pkxvm\") pod \"certified-operators-mn8gs\" (UID: \"641b2493-f39b-4215-b189-73893b0e03a4\") " pod="openshift-marketplace/certified-operators-mn8gs" Jan 27 13:13:18 crc kubenswrapper[4786]: I0127 13:13:18.950061 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/641b2493-f39b-4215-b189-73893b0e03a4-utilities\") pod \"certified-operators-mn8gs\" (UID: \"641b2493-f39b-4215-b189-73893b0e03a4\") " pod="openshift-marketplace/certified-operators-mn8gs" Jan 27 13:13:18 crc kubenswrapper[4786]: I0127 13:13:18.950112 4786 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/641b2493-f39b-4215-b189-73893b0e03a4-catalog-content\") pod \"certified-operators-mn8gs\" (UID: \"641b2493-f39b-4215-b189-73893b0e03a4\") " pod="openshift-marketplace/certified-operators-mn8gs" Jan 27 13:13:19 crc kubenswrapper[4786]: I0127 13:13:19.051271 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/641b2493-f39b-4215-b189-73893b0e03a4-catalog-content\") pod \"certified-operators-mn8gs\" (UID: \"641b2493-f39b-4215-b189-73893b0e03a4\") " pod="openshift-marketplace/certified-operators-mn8gs" Jan 27 13:13:19 crc kubenswrapper[4786]: I0127 13:13:19.051347 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkxvm\" (UniqueName: \"kubernetes.io/projected/641b2493-f39b-4215-b189-73893b0e03a4-kube-api-access-pkxvm\") pod \"certified-operators-mn8gs\" (UID: \"641b2493-f39b-4215-b189-73893b0e03a4\") " pod="openshift-marketplace/certified-operators-mn8gs" Jan 27 13:13:19 crc kubenswrapper[4786]: I0127 13:13:19.051411 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/641b2493-f39b-4215-b189-73893b0e03a4-utilities\") pod \"certified-operators-mn8gs\" (UID: \"641b2493-f39b-4215-b189-73893b0e03a4\") " pod="openshift-marketplace/certified-operators-mn8gs" Jan 27 13:13:19 crc kubenswrapper[4786]: I0127 13:13:19.051909 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/641b2493-f39b-4215-b189-73893b0e03a4-utilities\") pod \"certified-operators-mn8gs\" (UID: \"641b2493-f39b-4215-b189-73893b0e03a4\") " pod="openshift-marketplace/certified-operators-mn8gs" Jan 27 13:13:19 crc kubenswrapper[4786]: I0127 13:13:19.052014 4786 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/641b2493-f39b-4215-b189-73893b0e03a4-catalog-content\") pod \"certified-operators-mn8gs\" (UID: \"641b2493-f39b-4215-b189-73893b0e03a4\") " pod="openshift-marketplace/certified-operators-mn8gs" Jan 27 13:13:19 crc kubenswrapper[4786]: I0127 13:13:19.076754 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkxvm\" (UniqueName: \"kubernetes.io/projected/641b2493-f39b-4215-b189-73893b0e03a4-kube-api-access-pkxvm\") pod \"certified-operators-mn8gs\" (UID: \"641b2493-f39b-4215-b189-73893b0e03a4\") " pod="openshift-marketplace/certified-operators-mn8gs" Jan 27 13:13:19 crc kubenswrapper[4786]: I0127 13:13:19.194972 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mn8gs" Jan 27 13:13:19 crc kubenswrapper[4786]: I0127 13:13:19.553123 4786 generic.go:334] "Generic (PLEG): container finished" podID="c1abdef6-52c6-4c82-b750-a46910bbb108" containerID="8c87d6fca6dff4f92a19057b5359696a3f33467fab2cadff14ff5991d6f6c4ab" exitCode=0 Jan 27 13:13:19 crc kubenswrapper[4786]: I0127 13:13:19.553244 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvj57" event={"ID":"c1abdef6-52c6-4c82-b750-a46910bbb108","Type":"ContainerDied","Data":"8c87d6fca6dff4f92a19057b5359696a3f33467fab2cadff14ff5991d6f6c4ab"} Jan 27 13:13:19 crc kubenswrapper[4786]: I0127 13:13:19.553456 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvj57" event={"ID":"c1abdef6-52c6-4c82-b750-a46910bbb108","Type":"ContainerStarted","Data":"e82901297d5981d1a33000033b35ac9971ceafd80fa6fef403aea9dd3a529d8f"} Jan 27 13:13:19 crc kubenswrapper[4786]: I0127 13:13:19.630340 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mn8gs"] Jan 27 13:13:19 
crc kubenswrapper[4786]: W0127 13:13:19.637671 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod641b2493_f39b_4215_b189_73893b0e03a4.slice/crio-6edfd0d72d3bf3e3ba5ec8600b886de84426e9738b351270f1e806aa52a663fa WatchSource:0}: Error finding container 6edfd0d72d3bf3e3ba5ec8600b886de84426e9738b351270f1e806aa52a663fa: Status 404 returned error can't find the container with id 6edfd0d72d3bf3e3ba5ec8600b886de84426e9738b351270f1e806aa52a663fa Jan 27 13:13:20 crc kubenswrapper[4786]: I0127 13:13:20.227761 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-64c65"] Jan 27 13:13:20 crc kubenswrapper[4786]: I0127 13:13:20.229152 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-64c65" Jan 27 13:13:20 crc kubenswrapper[4786]: I0127 13:13:20.231874 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 13:13:20 crc kubenswrapper[4786]: I0127 13:13:20.234781 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-64c65"] Jan 27 13:13:20 crc kubenswrapper[4786]: I0127 13:13:20.367018 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84df5426-1bd1-4e68-bf2b-a3e7ef1fd9a2-catalog-content\") pod \"community-operators-64c65\" (UID: \"84df5426-1bd1-4e68-bf2b-a3e7ef1fd9a2\") " pod="openshift-marketplace/community-operators-64c65" Jan 27 13:13:20 crc kubenswrapper[4786]: I0127 13:13:20.367346 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4dth\" (UniqueName: \"kubernetes.io/projected/84df5426-1bd1-4e68-bf2b-a3e7ef1fd9a2-kube-api-access-r4dth\") pod \"community-operators-64c65\" (UID: 
\"84df5426-1bd1-4e68-bf2b-a3e7ef1fd9a2\") " pod="openshift-marketplace/community-operators-64c65" Jan 27 13:13:20 crc kubenswrapper[4786]: I0127 13:13:20.367368 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84df5426-1bd1-4e68-bf2b-a3e7ef1fd9a2-utilities\") pod \"community-operators-64c65\" (UID: \"84df5426-1bd1-4e68-bf2b-a3e7ef1fd9a2\") " pod="openshift-marketplace/community-operators-64c65" Jan 27 13:13:20 crc kubenswrapper[4786]: I0127 13:13:20.468527 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84df5426-1bd1-4e68-bf2b-a3e7ef1fd9a2-catalog-content\") pod \"community-operators-64c65\" (UID: \"84df5426-1bd1-4e68-bf2b-a3e7ef1fd9a2\") " pod="openshift-marketplace/community-operators-64c65" Jan 27 13:13:20 crc kubenswrapper[4786]: I0127 13:13:20.468566 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4dth\" (UniqueName: \"kubernetes.io/projected/84df5426-1bd1-4e68-bf2b-a3e7ef1fd9a2-kube-api-access-r4dth\") pod \"community-operators-64c65\" (UID: \"84df5426-1bd1-4e68-bf2b-a3e7ef1fd9a2\") " pod="openshift-marketplace/community-operators-64c65" Jan 27 13:13:20 crc kubenswrapper[4786]: I0127 13:13:20.468584 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84df5426-1bd1-4e68-bf2b-a3e7ef1fd9a2-utilities\") pod \"community-operators-64c65\" (UID: \"84df5426-1bd1-4e68-bf2b-a3e7ef1fd9a2\") " pod="openshift-marketplace/community-operators-64c65" Jan 27 13:13:20 crc kubenswrapper[4786]: I0127 13:13:20.469098 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84df5426-1bd1-4e68-bf2b-a3e7ef1fd9a2-utilities\") pod \"community-operators-64c65\" (UID: 
\"84df5426-1bd1-4e68-bf2b-a3e7ef1fd9a2\") " pod="openshift-marketplace/community-operators-64c65" Jan 27 13:13:20 crc kubenswrapper[4786]: I0127 13:13:20.469168 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84df5426-1bd1-4e68-bf2b-a3e7ef1fd9a2-catalog-content\") pod \"community-operators-64c65\" (UID: \"84df5426-1bd1-4e68-bf2b-a3e7ef1fd9a2\") " pod="openshift-marketplace/community-operators-64c65" Jan 27 13:13:20 crc kubenswrapper[4786]: I0127 13:13:20.490394 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4dth\" (UniqueName: \"kubernetes.io/projected/84df5426-1bd1-4e68-bf2b-a3e7ef1fd9a2-kube-api-access-r4dth\") pod \"community-operators-64c65\" (UID: \"84df5426-1bd1-4e68-bf2b-a3e7ef1fd9a2\") " pod="openshift-marketplace/community-operators-64c65" Jan 27 13:13:20 crc kubenswrapper[4786]: I0127 13:13:20.559924 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvj57" event={"ID":"c1abdef6-52c6-4c82-b750-a46910bbb108","Type":"ContainerStarted","Data":"e730ffdf070eefb4d19e8a52266cefacdd96639d70e99d02d495681416dd2e37"} Jan 27 13:13:20 crc kubenswrapper[4786]: I0127 13:13:20.561366 4786 generic.go:334] "Generic (PLEG): container finished" podID="641b2493-f39b-4215-b189-73893b0e03a4" containerID="06731271ae6d9fce73f5097614d9f548fa031fc1e69df2cc2c7632b479093b7c" exitCode=0 Jan 27 13:13:20 crc kubenswrapper[4786]: I0127 13:13:20.561418 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mn8gs" event={"ID":"641b2493-f39b-4215-b189-73893b0e03a4","Type":"ContainerDied","Data":"06731271ae6d9fce73f5097614d9f548fa031fc1e69df2cc2c7632b479093b7c"} Jan 27 13:13:20 crc kubenswrapper[4786]: I0127 13:13:20.561449 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mn8gs" 
event={"ID":"641b2493-f39b-4215-b189-73893b0e03a4","Type":"ContainerStarted","Data":"6edfd0d72d3bf3e3ba5ec8600b886de84426e9738b351270f1e806aa52a663fa"} Jan 27 13:13:20 crc kubenswrapper[4786]: I0127 13:13:20.570369 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-64c65" Jan 27 13:13:21 crc kubenswrapper[4786]: I0127 13:13:21.042075 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-64c65"] Jan 27 13:13:21 crc kubenswrapper[4786]: W0127 13:13:21.051384 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84df5426_1bd1_4e68_bf2b_a3e7ef1fd9a2.slice/crio-fd6429fe99298a145f924ade0909f8eb6ea4b13235f79ac3fcebcc1372c68dbe WatchSource:0}: Error finding container fd6429fe99298a145f924ade0909f8eb6ea4b13235f79ac3fcebcc1372c68dbe: Status 404 returned error can't find the container with id fd6429fe99298a145f924ade0909f8eb6ea4b13235f79ac3fcebcc1372c68dbe Jan 27 13:13:21 crc kubenswrapper[4786]: I0127 13:13:21.224661 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jsgtz"] Jan 27 13:13:21 crc kubenswrapper[4786]: I0127 13:13:21.225872 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jsgtz" Jan 27 13:13:21 crc kubenswrapper[4786]: I0127 13:13:21.231338 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 13:13:21 crc kubenswrapper[4786]: I0127 13:13:21.234831 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jsgtz"] Jan 27 13:13:21 crc kubenswrapper[4786]: I0127 13:13:21.379803 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74e8e0b9-48df-45eb-a0dd-72271431991c-catalog-content\") pod \"redhat-marketplace-jsgtz\" (UID: \"74e8e0b9-48df-45eb-a0dd-72271431991c\") " pod="openshift-marketplace/redhat-marketplace-jsgtz" Jan 27 13:13:21 crc kubenswrapper[4786]: I0127 13:13:21.379846 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74e8e0b9-48df-45eb-a0dd-72271431991c-utilities\") pod \"redhat-marketplace-jsgtz\" (UID: \"74e8e0b9-48df-45eb-a0dd-72271431991c\") " pod="openshift-marketplace/redhat-marketplace-jsgtz" Jan 27 13:13:21 crc kubenswrapper[4786]: I0127 13:13:21.379866 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmkc9\" (UniqueName: \"kubernetes.io/projected/74e8e0b9-48df-45eb-a0dd-72271431991c-kube-api-access-vmkc9\") pod \"redhat-marketplace-jsgtz\" (UID: \"74e8e0b9-48df-45eb-a0dd-72271431991c\") " pod="openshift-marketplace/redhat-marketplace-jsgtz" Jan 27 13:13:21 crc kubenswrapper[4786]: I0127 13:13:21.480923 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74e8e0b9-48df-45eb-a0dd-72271431991c-catalog-content\") pod \"redhat-marketplace-jsgtz\" (UID: 
\"74e8e0b9-48df-45eb-a0dd-72271431991c\") " pod="openshift-marketplace/redhat-marketplace-jsgtz" Jan 27 13:13:21 crc kubenswrapper[4786]: I0127 13:13:21.481272 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmkc9\" (UniqueName: \"kubernetes.io/projected/74e8e0b9-48df-45eb-a0dd-72271431991c-kube-api-access-vmkc9\") pod \"redhat-marketplace-jsgtz\" (UID: \"74e8e0b9-48df-45eb-a0dd-72271431991c\") " pod="openshift-marketplace/redhat-marketplace-jsgtz" Jan 27 13:13:21 crc kubenswrapper[4786]: I0127 13:13:21.481290 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74e8e0b9-48df-45eb-a0dd-72271431991c-utilities\") pod \"redhat-marketplace-jsgtz\" (UID: \"74e8e0b9-48df-45eb-a0dd-72271431991c\") " pod="openshift-marketplace/redhat-marketplace-jsgtz" Jan 27 13:13:21 crc kubenswrapper[4786]: I0127 13:13:21.481530 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74e8e0b9-48df-45eb-a0dd-72271431991c-catalog-content\") pod \"redhat-marketplace-jsgtz\" (UID: \"74e8e0b9-48df-45eb-a0dd-72271431991c\") " pod="openshift-marketplace/redhat-marketplace-jsgtz" Jan 27 13:13:21 crc kubenswrapper[4786]: I0127 13:13:21.481729 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74e8e0b9-48df-45eb-a0dd-72271431991c-utilities\") pod \"redhat-marketplace-jsgtz\" (UID: \"74e8e0b9-48df-45eb-a0dd-72271431991c\") " pod="openshift-marketplace/redhat-marketplace-jsgtz" Jan 27 13:13:21 crc kubenswrapper[4786]: I0127 13:13:21.510341 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmkc9\" (UniqueName: \"kubernetes.io/projected/74e8e0b9-48df-45eb-a0dd-72271431991c-kube-api-access-vmkc9\") pod \"redhat-marketplace-jsgtz\" (UID: \"74e8e0b9-48df-45eb-a0dd-72271431991c\") " 
pod="openshift-marketplace/redhat-marketplace-jsgtz" Jan 27 13:13:21 crc kubenswrapper[4786]: I0127 13:13:21.571224 4786 generic.go:334] "Generic (PLEG): container finished" podID="84df5426-1bd1-4e68-bf2b-a3e7ef1fd9a2" containerID="9d8838f9a39dbcebc62f9e6d7d3a93d6240b35a22ce5ed449c7b742d8e3e7aa6" exitCode=0 Jan 27 13:13:21 crc kubenswrapper[4786]: I0127 13:13:21.571272 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-64c65" event={"ID":"84df5426-1bd1-4e68-bf2b-a3e7ef1fd9a2","Type":"ContainerDied","Data":"9d8838f9a39dbcebc62f9e6d7d3a93d6240b35a22ce5ed449c7b742d8e3e7aa6"} Jan 27 13:13:21 crc kubenswrapper[4786]: I0127 13:13:21.571314 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-64c65" event={"ID":"84df5426-1bd1-4e68-bf2b-a3e7ef1fd9a2","Type":"ContainerStarted","Data":"fd6429fe99298a145f924ade0909f8eb6ea4b13235f79ac3fcebcc1372c68dbe"} Jan 27 13:13:21 crc kubenswrapper[4786]: I0127 13:13:21.573788 4786 generic.go:334] "Generic (PLEG): container finished" podID="c1abdef6-52c6-4c82-b750-a46910bbb108" containerID="e730ffdf070eefb4d19e8a52266cefacdd96639d70e99d02d495681416dd2e37" exitCode=0 Jan 27 13:13:21 crc kubenswrapper[4786]: I0127 13:13:21.573816 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvj57" event={"ID":"c1abdef6-52c6-4c82-b750-a46910bbb108","Type":"ContainerDied","Data":"e730ffdf070eefb4d19e8a52266cefacdd96639d70e99d02d495681416dd2e37"} Jan 27 13:13:21 crc kubenswrapper[4786]: I0127 13:13:21.579590 4786 generic.go:334] "Generic (PLEG): container finished" podID="641b2493-f39b-4215-b189-73893b0e03a4" containerID="fa914eb71fbd54771df447a56b68d2bf1830313051984dfed2e9225e1817de7d" exitCode=0 Jan 27 13:13:21 crc kubenswrapper[4786]: I0127 13:13:21.579660 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mn8gs" 
event={"ID":"641b2493-f39b-4215-b189-73893b0e03a4","Type":"ContainerDied","Data":"fa914eb71fbd54771df447a56b68d2bf1830313051984dfed2e9225e1817de7d"} Jan 27 13:13:21 crc kubenswrapper[4786]: I0127 13:13:21.607584 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jsgtz" Jan 27 13:13:21 crc kubenswrapper[4786]: I0127 13:13:21.982489 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jsgtz"] Jan 27 13:13:22 crc kubenswrapper[4786]: I0127 13:13:22.588293 4786 generic.go:334] "Generic (PLEG): container finished" podID="74e8e0b9-48df-45eb-a0dd-72271431991c" containerID="7be65c04c4f73b2dc96210d670cd76ea224917bd7a654d9cd3e7f922f835b9e4" exitCode=0 Jan 27 13:13:22 crc kubenswrapper[4786]: I0127 13:13:22.588998 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jsgtz" event={"ID":"74e8e0b9-48df-45eb-a0dd-72271431991c","Type":"ContainerDied","Data":"7be65c04c4f73b2dc96210d670cd76ea224917bd7a654d9cd3e7f922f835b9e4"} Jan 27 13:13:22 crc kubenswrapper[4786]: I0127 13:13:22.589071 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jsgtz" event={"ID":"74e8e0b9-48df-45eb-a0dd-72271431991c","Type":"ContainerStarted","Data":"1732fd1a5e713f87962365a072ea159abc3615e01cb95ac2e874f69ce0ed76ef"} Jan 27 13:13:22 crc kubenswrapper[4786]: I0127 13:13:22.601349 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-64c65" event={"ID":"84df5426-1bd1-4e68-bf2b-a3e7ef1fd9a2","Type":"ContainerStarted","Data":"a26085c2e96a49e7b00fdae913af06b3b034106d66e9c7e423c6551adebd82a7"} Jan 27 13:13:22 crc kubenswrapper[4786]: I0127 13:13:22.607269 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvj57" 
event={"ID":"c1abdef6-52c6-4c82-b750-a46910bbb108","Type":"ContainerStarted","Data":"bba2c9509faacfee884ce5caca6c2d51776757812dd01030aa63da9c8536b07f"} Jan 27 13:13:22 crc kubenswrapper[4786]: I0127 13:13:22.616989 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mn8gs" event={"ID":"641b2493-f39b-4215-b189-73893b0e03a4","Type":"ContainerStarted","Data":"ea9b4a1871b32cbcdb121c42b00de91b3f51e46e9b4b7c2959120119dd04f74d"} Jan 27 13:13:22 crc kubenswrapper[4786]: I0127 13:13:22.636257 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mn8gs" podStartSLOduration=3.17075534 podStartE2EDuration="4.636232912s" podCreationTimestamp="2026-01-27 13:13:18 +0000 UTC" firstStartedPulling="2026-01-27 13:13:20.562453417 +0000 UTC m=+383.773067546" lastFinishedPulling="2026-01-27 13:13:22.027930999 +0000 UTC m=+385.238545118" observedRunningTime="2026-01-27 13:13:22.635185324 +0000 UTC m=+385.845799443" watchObservedRunningTime="2026-01-27 13:13:22.636232912 +0000 UTC m=+385.846847041" Jan 27 13:13:22 crc kubenswrapper[4786]: I0127 13:13:22.662190 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xvj57" podStartSLOduration=3.261785191 podStartE2EDuration="5.662171984s" podCreationTimestamp="2026-01-27 13:13:17 +0000 UTC" firstStartedPulling="2026-01-27 13:13:19.55517217 +0000 UTC m=+382.765786289" lastFinishedPulling="2026-01-27 13:13:21.955558963 +0000 UTC m=+385.166173082" observedRunningTime="2026-01-27 13:13:22.660527059 +0000 UTC m=+385.871141188" watchObservedRunningTime="2026-01-27 13:13:22.662171984 +0000 UTC m=+385.872786103" Jan 27 13:13:23 crc kubenswrapper[4786]: I0127 13:13:23.622834 4786 generic.go:334] "Generic (PLEG): container finished" podID="74e8e0b9-48df-45eb-a0dd-72271431991c" containerID="f573716cf275475df9b963330345019a0ed5cae42c70bd6a0debef67212d3c9b" exitCode=0 Jan 27 
13:13:23 crc kubenswrapper[4786]: I0127 13:13:23.622948 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jsgtz" event={"ID":"74e8e0b9-48df-45eb-a0dd-72271431991c","Type":"ContainerDied","Data":"f573716cf275475df9b963330345019a0ed5cae42c70bd6a0debef67212d3c9b"} Jan 27 13:13:23 crc kubenswrapper[4786]: I0127 13:13:23.628199 4786 generic.go:334] "Generic (PLEG): container finished" podID="84df5426-1bd1-4e68-bf2b-a3e7ef1fd9a2" containerID="a26085c2e96a49e7b00fdae913af06b3b034106d66e9c7e423c6551adebd82a7" exitCode=0 Jan 27 13:13:23 crc kubenswrapper[4786]: I0127 13:13:23.628293 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-64c65" event={"ID":"84df5426-1bd1-4e68-bf2b-a3e7ef1fd9a2","Type":"ContainerDied","Data":"a26085c2e96a49e7b00fdae913af06b3b034106d66e9c7e423c6551adebd82a7"} Jan 27 13:13:23 crc kubenswrapper[4786]: I0127 13:13:23.628349 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-64c65" event={"ID":"84df5426-1bd1-4e68-bf2b-a3e7ef1fd9a2","Type":"ContainerStarted","Data":"5674349a68c086a0485a591a0d8c0d218acb6caa72b9594bb3bdb263e1b6076d"} Jan 27 13:13:23 crc kubenswrapper[4786]: I0127 13:13:23.659239 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-64c65" podStartSLOduration=2.210905225 podStartE2EDuration="3.659220994s" podCreationTimestamp="2026-01-27 13:13:20 +0000 UTC" firstStartedPulling="2026-01-27 13:13:21.573447104 +0000 UTC m=+384.784061213" lastFinishedPulling="2026-01-27 13:13:23.021762843 +0000 UTC m=+386.232376982" observedRunningTime="2026-01-27 13:13:23.658196706 +0000 UTC m=+386.868810825" watchObservedRunningTime="2026-01-27 13:13:23.659220994 +0000 UTC m=+386.869835113" Jan 27 13:13:25 crc kubenswrapper[4786]: I0127 13:13:25.640440 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-jsgtz" event={"ID":"74e8e0b9-48df-45eb-a0dd-72271431991c","Type":"ContainerStarted","Data":"eac50555cea8de4ad7a3aa6ae7389b380e81dac49f9b2644c5ace23d6312f3f7"} Jan 27 13:13:25 crc kubenswrapper[4786]: I0127 13:13:25.659062 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jsgtz" podStartSLOduration=3.233236458 podStartE2EDuration="4.659044259s" podCreationTimestamp="2026-01-27 13:13:21 +0000 UTC" firstStartedPulling="2026-01-27 13:13:22.589961191 +0000 UTC m=+385.800575300" lastFinishedPulling="2026-01-27 13:13:24.015768982 +0000 UTC m=+387.226383101" observedRunningTime="2026-01-27 13:13:25.654400134 +0000 UTC m=+388.865014263" watchObservedRunningTime="2026-01-27 13:13:25.659044259 +0000 UTC m=+388.869658378" Jan 27 13:13:26 crc kubenswrapper[4786]: I0127 13:13:26.677211 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-9tfqt" Jan 27 13:13:26 crc kubenswrapper[4786]: I0127 13:13:26.730809 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qrqjl"] Jan 27 13:13:28 crc kubenswrapper[4786]: I0127 13:13:28.157871 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xvj57" Jan 27 13:13:28 crc kubenswrapper[4786]: I0127 13:13:28.157920 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xvj57" Jan 27 13:13:28 crc kubenswrapper[4786]: I0127 13:13:28.200486 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xvj57" Jan 27 13:13:28 crc kubenswrapper[4786]: I0127 13:13:28.698271 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xvj57" Jan 27 13:13:29 crc 
kubenswrapper[4786]: I0127 13:13:29.195802 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mn8gs" Jan 27 13:13:29 crc kubenswrapper[4786]: I0127 13:13:29.195877 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mn8gs" Jan 27 13:13:29 crc kubenswrapper[4786]: I0127 13:13:29.234810 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mn8gs" Jan 27 13:13:29 crc kubenswrapper[4786]: I0127 13:13:29.695202 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mn8gs" Jan 27 13:13:30 crc kubenswrapper[4786]: I0127 13:13:30.570664 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-64c65" Jan 27 13:13:30 crc kubenswrapper[4786]: I0127 13:13:30.570935 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-64c65" Jan 27 13:13:30 crc kubenswrapper[4786]: I0127 13:13:30.607129 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-64c65" Jan 27 13:13:30 crc kubenswrapper[4786]: I0127 13:13:30.700896 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-64c65" Jan 27 13:13:31 crc kubenswrapper[4786]: I0127 13:13:31.608104 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jsgtz" Jan 27 13:13:31 crc kubenswrapper[4786]: I0127 13:13:31.609042 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jsgtz" Jan 27 13:13:31 crc kubenswrapper[4786]: I0127 13:13:31.644280 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-marketplace-jsgtz" Jan 27 13:13:31 crc kubenswrapper[4786]: I0127 13:13:31.712169 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jsgtz" Jan 27 13:13:39 crc kubenswrapper[4786]: I0127 13:13:39.532218 4786 patch_prober.go:28] interesting pod/machine-config-daemon-7bxtk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:13:39 crc kubenswrapper[4786]: I0127 13:13:39.532526 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:13:51 crc kubenswrapper[4786]: I0127 13:13:51.771800 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" podUID="6adde762-4e97-44eb-a96c-14a79ec7998a" containerName="registry" containerID="cri-o://48c8498cc54d432918b9729360a9e03e4b79eab632adaf0228e242007174bb14" gracePeriod=30 Jan 27 13:13:52 crc kubenswrapper[4786]: I0127 13:13:52.251028 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:13:52 crc kubenswrapper[4786]: I0127 13:13:52.412253 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9vft\" (UniqueName: \"kubernetes.io/projected/6adde762-4e97-44eb-a96c-14a79ec7998a-kube-api-access-w9vft\") pod \"6adde762-4e97-44eb-a96c-14a79ec7998a\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " Jan 27 13:13:52 crc kubenswrapper[4786]: I0127 13:13:52.412371 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6adde762-4e97-44eb-a96c-14a79ec7998a-bound-sa-token\") pod \"6adde762-4e97-44eb-a96c-14a79ec7998a\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " Jan 27 13:13:52 crc kubenswrapper[4786]: I0127 13:13:52.412658 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"6adde762-4e97-44eb-a96c-14a79ec7998a\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " Jan 27 13:13:52 crc kubenswrapper[4786]: I0127 13:13:52.412695 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6adde762-4e97-44eb-a96c-14a79ec7998a-registry-certificates\") pod \"6adde762-4e97-44eb-a96c-14a79ec7998a\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " Jan 27 13:13:52 crc kubenswrapper[4786]: I0127 13:13:52.412719 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6adde762-4e97-44eb-a96c-14a79ec7998a-ca-trust-extracted\") pod \"6adde762-4e97-44eb-a96c-14a79ec7998a\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " Jan 27 13:13:52 crc kubenswrapper[4786]: I0127 13:13:52.412745 4786 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6adde762-4e97-44eb-a96c-14a79ec7998a-trusted-ca\") pod \"6adde762-4e97-44eb-a96c-14a79ec7998a\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " Jan 27 13:13:52 crc kubenswrapper[4786]: I0127 13:13:52.412780 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6adde762-4e97-44eb-a96c-14a79ec7998a-registry-tls\") pod \"6adde762-4e97-44eb-a96c-14a79ec7998a\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " Jan 27 13:13:52 crc kubenswrapper[4786]: I0127 13:13:52.412803 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6adde762-4e97-44eb-a96c-14a79ec7998a-installation-pull-secrets\") pod \"6adde762-4e97-44eb-a96c-14a79ec7998a\" (UID: \"6adde762-4e97-44eb-a96c-14a79ec7998a\") " Jan 27 13:13:52 crc kubenswrapper[4786]: I0127 13:13:52.413462 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6adde762-4e97-44eb-a96c-14a79ec7998a-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "6adde762-4e97-44eb-a96c-14a79ec7998a" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:13:52 crc kubenswrapper[4786]: I0127 13:13:52.415360 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6adde762-4e97-44eb-a96c-14a79ec7998a-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "6adde762-4e97-44eb-a96c-14a79ec7998a" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:13:52 crc kubenswrapper[4786]: I0127 13:13:52.418336 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6adde762-4e97-44eb-a96c-14a79ec7998a-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "6adde762-4e97-44eb-a96c-14a79ec7998a" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:13:52 crc kubenswrapper[4786]: I0127 13:13:52.422311 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6adde762-4e97-44eb-a96c-14a79ec7998a-kube-api-access-w9vft" (OuterVolumeSpecName: "kube-api-access-w9vft") pod "6adde762-4e97-44eb-a96c-14a79ec7998a" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a"). InnerVolumeSpecName "kube-api-access-w9vft". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:13:52 crc kubenswrapper[4786]: I0127 13:13:52.423227 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "6adde762-4e97-44eb-a96c-14a79ec7998a" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 13:13:52 crc kubenswrapper[4786]: I0127 13:13:52.423271 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6adde762-4e97-44eb-a96c-14a79ec7998a-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "6adde762-4e97-44eb-a96c-14a79ec7998a" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:13:52 crc kubenswrapper[4786]: I0127 13:13:52.423609 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6adde762-4e97-44eb-a96c-14a79ec7998a-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "6adde762-4e97-44eb-a96c-14a79ec7998a" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:13:52 crc kubenswrapper[4786]: I0127 13:13:52.431868 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6adde762-4e97-44eb-a96c-14a79ec7998a-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "6adde762-4e97-44eb-a96c-14a79ec7998a" (UID: "6adde762-4e97-44eb-a96c-14a79ec7998a"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:13:52 crc kubenswrapper[4786]: I0127 13:13:52.515051 4786 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6adde762-4e97-44eb-a96c-14a79ec7998a-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 13:13:52 crc kubenswrapper[4786]: I0127 13:13:52.515114 4786 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6adde762-4e97-44eb-a96c-14a79ec7998a-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 27 13:13:52 crc kubenswrapper[4786]: I0127 13:13:52.515136 4786 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6adde762-4e97-44eb-a96c-14a79ec7998a-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 27 13:13:52 crc kubenswrapper[4786]: I0127 13:13:52.515156 4786 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/6adde762-4e97-44eb-a96c-14a79ec7998a-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 13:13:52 crc kubenswrapper[4786]: I0127 13:13:52.515181 4786 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6adde762-4e97-44eb-a96c-14a79ec7998a-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 27 13:13:52 crc kubenswrapper[4786]: I0127 13:13:52.515205 4786 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6adde762-4e97-44eb-a96c-14a79ec7998a-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 27 13:13:52 crc kubenswrapper[4786]: I0127 13:13:52.515225 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9vft\" (UniqueName: \"kubernetes.io/projected/6adde762-4e97-44eb-a96c-14a79ec7998a-kube-api-access-w9vft\") on node \"crc\" DevicePath \"\"" Jan 27 13:13:52 crc kubenswrapper[4786]: I0127 13:13:52.787063 4786 generic.go:334] "Generic (PLEG): container finished" podID="6adde762-4e97-44eb-a96c-14a79ec7998a" containerID="48c8498cc54d432918b9729360a9e03e4b79eab632adaf0228e242007174bb14" exitCode=0 Jan 27 13:13:52 crc kubenswrapper[4786]: I0127 13:13:52.787127 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" event={"ID":"6adde762-4e97-44eb-a96c-14a79ec7998a","Type":"ContainerDied","Data":"48c8498cc54d432918b9729360a9e03e4b79eab632adaf0228e242007174bb14"} Jan 27 13:13:52 crc kubenswrapper[4786]: I0127 13:13:52.787140 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" Jan 27 13:13:52 crc kubenswrapper[4786]: I0127 13:13:52.787181 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qrqjl" event={"ID":"6adde762-4e97-44eb-a96c-14a79ec7998a","Type":"ContainerDied","Data":"99c0216ee1db2fa0acebbece627e56ff8682441f61c9a857ae28bb7bdc41094c"} Jan 27 13:13:52 crc kubenswrapper[4786]: I0127 13:13:52.787210 4786 scope.go:117] "RemoveContainer" containerID="48c8498cc54d432918b9729360a9e03e4b79eab632adaf0228e242007174bb14" Jan 27 13:13:52 crc kubenswrapper[4786]: I0127 13:13:52.828203 4786 scope.go:117] "RemoveContainer" containerID="48c8498cc54d432918b9729360a9e03e4b79eab632adaf0228e242007174bb14" Jan 27 13:13:52 crc kubenswrapper[4786]: E0127 13:13:52.835562 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48c8498cc54d432918b9729360a9e03e4b79eab632adaf0228e242007174bb14\": container with ID starting with 48c8498cc54d432918b9729360a9e03e4b79eab632adaf0228e242007174bb14 not found: ID does not exist" containerID="48c8498cc54d432918b9729360a9e03e4b79eab632adaf0228e242007174bb14" Jan 27 13:13:52 crc kubenswrapper[4786]: I0127 13:13:52.835706 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48c8498cc54d432918b9729360a9e03e4b79eab632adaf0228e242007174bb14"} err="failed to get container status \"48c8498cc54d432918b9729360a9e03e4b79eab632adaf0228e242007174bb14\": rpc error: code = NotFound desc = could not find container \"48c8498cc54d432918b9729360a9e03e4b79eab632adaf0228e242007174bb14\": container with ID starting with 48c8498cc54d432918b9729360a9e03e4b79eab632adaf0228e242007174bb14 not found: ID does not exist" Jan 27 13:13:52 crc kubenswrapper[4786]: I0127 13:13:52.838349 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-qrqjl"] Jan 27 13:13:52 crc kubenswrapper[4786]: I0127 13:13:52.846557 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qrqjl"] Jan 27 13:13:53 crc kubenswrapper[4786]: I0127 13:13:53.472208 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6adde762-4e97-44eb-a96c-14a79ec7998a" path="/var/lib/kubelet/pods/6adde762-4e97-44eb-a96c-14a79ec7998a/volumes" Jan 27 13:14:09 crc kubenswrapper[4786]: I0127 13:14:09.532574 4786 patch_prober.go:28] interesting pod/machine-config-daemon-7bxtk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:14:09 crc kubenswrapper[4786]: I0127 13:14:09.533252 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:14:09 crc kubenswrapper[4786]: I0127 13:14:09.533310 4786 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" Jan 27 13:14:09 crc kubenswrapper[4786]: I0127 13:14:09.534158 4786 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"68b3b3585ee3cd83b41e2ece5024314ee69b7da4279bac6a5facbdf2f311dbb0"} pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 13:14:09 crc kubenswrapper[4786]: I0127 13:14:09.534276 4786 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" containerID="cri-o://68b3b3585ee3cd83b41e2ece5024314ee69b7da4279bac6a5facbdf2f311dbb0" gracePeriod=600 Jan 27 13:14:09 crc kubenswrapper[4786]: I0127 13:14:09.894558 4786 generic.go:334] "Generic (PLEG): container finished" podID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerID="68b3b3585ee3cd83b41e2ece5024314ee69b7da4279bac6a5facbdf2f311dbb0" exitCode=0 Jan 27 13:14:09 crc kubenswrapper[4786]: I0127 13:14:09.894636 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" event={"ID":"2c6a2646-52f7-41be-8a81-3fed6eac75cc","Type":"ContainerDied","Data":"68b3b3585ee3cd83b41e2ece5024314ee69b7da4279bac6a5facbdf2f311dbb0"} Jan 27 13:14:09 crc kubenswrapper[4786]: I0127 13:14:09.894952 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" event={"ID":"2c6a2646-52f7-41be-8a81-3fed6eac75cc","Type":"ContainerStarted","Data":"ba183c40aadd721fd57aabbb3b26c846c14e0b60f4d9d824803d67f76f6da170"} Jan 27 13:14:09 crc kubenswrapper[4786]: I0127 13:14:09.894978 4786 scope.go:117] "RemoveContainer" containerID="8c39d828ab97f5bcb6d5e29df28ec844ede201872825065370a69b8ec3b035e7" Jan 27 13:15:00 crc kubenswrapper[4786]: I0127 13:15:00.164978 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491995-v9pkz"] Jan 27 13:15:00 crc kubenswrapper[4786]: E0127 13:15:00.165810 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6adde762-4e97-44eb-a96c-14a79ec7998a" containerName="registry" Jan 27 13:15:00 crc kubenswrapper[4786]: I0127 13:15:00.165827 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="6adde762-4e97-44eb-a96c-14a79ec7998a" containerName="registry" Jan 27 13:15:00 crc 
kubenswrapper[4786]: I0127 13:15:00.165948 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="6adde762-4e97-44eb-a96c-14a79ec7998a" containerName="registry" Jan 27 13:15:00 crc kubenswrapper[4786]: I0127 13:15:00.166407 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491995-v9pkz" Jan 27 13:15:00 crc kubenswrapper[4786]: I0127 13:15:00.168082 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 13:15:00 crc kubenswrapper[4786]: I0127 13:15:00.168772 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 13:15:00 crc kubenswrapper[4786]: I0127 13:15:00.171451 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491995-v9pkz"] Jan 27 13:15:00 crc kubenswrapper[4786]: I0127 13:15:00.245192 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a71a23f-4a90-4716-9562-f7d45ed47208-config-volume\") pod \"collect-profiles-29491995-v9pkz\" (UID: \"3a71a23f-4a90-4716-9562-f7d45ed47208\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491995-v9pkz" Jan 27 13:15:00 crc kubenswrapper[4786]: I0127 13:15:00.245317 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a71a23f-4a90-4716-9562-f7d45ed47208-secret-volume\") pod \"collect-profiles-29491995-v9pkz\" (UID: \"3a71a23f-4a90-4716-9562-f7d45ed47208\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491995-v9pkz" Jan 27 13:15:00 crc kubenswrapper[4786]: I0127 13:15:00.245344 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt5vs\" (UniqueName: \"kubernetes.io/projected/3a71a23f-4a90-4716-9562-f7d45ed47208-kube-api-access-qt5vs\") pod \"collect-profiles-29491995-v9pkz\" (UID: \"3a71a23f-4a90-4716-9562-f7d45ed47208\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491995-v9pkz" Jan 27 13:15:00 crc kubenswrapper[4786]: I0127 13:15:00.346887 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a71a23f-4a90-4716-9562-f7d45ed47208-secret-volume\") pod \"collect-profiles-29491995-v9pkz\" (UID: \"3a71a23f-4a90-4716-9562-f7d45ed47208\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491995-v9pkz" Jan 27 13:15:00 crc kubenswrapper[4786]: I0127 13:15:00.346927 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt5vs\" (UniqueName: \"kubernetes.io/projected/3a71a23f-4a90-4716-9562-f7d45ed47208-kube-api-access-qt5vs\") pod \"collect-profiles-29491995-v9pkz\" (UID: \"3a71a23f-4a90-4716-9562-f7d45ed47208\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491995-v9pkz" Jan 27 13:15:00 crc kubenswrapper[4786]: I0127 13:15:00.346948 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a71a23f-4a90-4716-9562-f7d45ed47208-config-volume\") pod \"collect-profiles-29491995-v9pkz\" (UID: \"3a71a23f-4a90-4716-9562-f7d45ed47208\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491995-v9pkz" Jan 27 13:15:00 crc kubenswrapper[4786]: I0127 13:15:00.347833 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a71a23f-4a90-4716-9562-f7d45ed47208-config-volume\") pod \"collect-profiles-29491995-v9pkz\" (UID: \"3a71a23f-4a90-4716-9562-f7d45ed47208\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29491995-v9pkz" Jan 27 13:15:00 crc kubenswrapper[4786]: I0127 13:15:00.359379 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a71a23f-4a90-4716-9562-f7d45ed47208-secret-volume\") pod \"collect-profiles-29491995-v9pkz\" (UID: \"3a71a23f-4a90-4716-9562-f7d45ed47208\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491995-v9pkz" Jan 27 13:15:00 crc kubenswrapper[4786]: I0127 13:15:00.366254 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt5vs\" (UniqueName: \"kubernetes.io/projected/3a71a23f-4a90-4716-9562-f7d45ed47208-kube-api-access-qt5vs\") pod \"collect-profiles-29491995-v9pkz\" (UID: \"3a71a23f-4a90-4716-9562-f7d45ed47208\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491995-v9pkz" Jan 27 13:15:00 crc kubenswrapper[4786]: I0127 13:15:00.483776 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491995-v9pkz" Jan 27 13:15:00 crc kubenswrapper[4786]: I0127 13:15:00.878244 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491995-v9pkz"] Jan 27 13:15:01 crc kubenswrapper[4786]: I0127 13:15:01.192999 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491995-v9pkz" event={"ID":"3a71a23f-4a90-4716-9562-f7d45ed47208","Type":"ContainerStarted","Data":"b57d0b4d55fd04b638bd920b8ad6f665860ad74ca564f522e23683bd49c10ba7"} Jan 27 13:15:01 crc kubenswrapper[4786]: I0127 13:15:01.193315 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491995-v9pkz" event={"ID":"3a71a23f-4a90-4716-9562-f7d45ed47208","Type":"ContainerStarted","Data":"f0ed734fb7f64c28d663f8a94521f2af8e34c824d3e83ce307b83dc73888dd29"} Jan 27 13:15:01 crc kubenswrapper[4786]: I0127 13:15:01.207723 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29491995-v9pkz" podStartSLOduration=1.207702483 podStartE2EDuration="1.207702483s" podCreationTimestamp="2026-01-27 13:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:15:01.206424019 +0000 UTC m=+484.417038148" watchObservedRunningTime="2026-01-27 13:15:01.207702483 +0000 UTC m=+484.418316602" Jan 27 13:15:02 crc kubenswrapper[4786]: I0127 13:15:02.199766 4786 generic.go:334] "Generic (PLEG): container finished" podID="3a71a23f-4a90-4716-9562-f7d45ed47208" containerID="b57d0b4d55fd04b638bd920b8ad6f665860ad74ca564f522e23683bd49c10ba7" exitCode=0 Jan 27 13:15:02 crc kubenswrapper[4786]: I0127 13:15:02.199850 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29491995-v9pkz" event={"ID":"3a71a23f-4a90-4716-9562-f7d45ed47208","Type":"ContainerDied","Data":"b57d0b4d55fd04b638bd920b8ad6f665860ad74ca564f522e23683bd49c10ba7"} Jan 27 13:15:03 crc kubenswrapper[4786]: I0127 13:15:03.405476 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491995-v9pkz" Jan 27 13:15:03 crc kubenswrapper[4786]: I0127 13:15:03.482070 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a71a23f-4a90-4716-9562-f7d45ed47208-secret-volume\") pod \"3a71a23f-4a90-4716-9562-f7d45ed47208\" (UID: \"3a71a23f-4a90-4716-9562-f7d45ed47208\") " Jan 27 13:15:03 crc kubenswrapper[4786]: I0127 13:15:03.482138 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a71a23f-4a90-4716-9562-f7d45ed47208-config-volume\") pod \"3a71a23f-4a90-4716-9562-f7d45ed47208\" (UID: \"3a71a23f-4a90-4716-9562-f7d45ed47208\") " Jan 27 13:15:03 crc kubenswrapper[4786]: I0127 13:15:03.482261 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qt5vs\" (UniqueName: \"kubernetes.io/projected/3a71a23f-4a90-4716-9562-f7d45ed47208-kube-api-access-qt5vs\") pod \"3a71a23f-4a90-4716-9562-f7d45ed47208\" (UID: \"3a71a23f-4a90-4716-9562-f7d45ed47208\") " Jan 27 13:15:03 crc kubenswrapper[4786]: I0127 13:15:03.482625 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a71a23f-4a90-4716-9562-f7d45ed47208-config-volume" (OuterVolumeSpecName: "config-volume") pod "3a71a23f-4a90-4716-9562-f7d45ed47208" (UID: "3a71a23f-4a90-4716-9562-f7d45ed47208"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:15:03 crc kubenswrapper[4786]: I0127 13:15:03.487212 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a71a23f-4a90-4716-9562-f7d45ed47208-kube-api-access-qt5vs" (OuterVolumeSpecName: "kube-api-access-qt5vs") pod "3a71a23f-4a90-4716-9562-f7d45ed47208" (UID: "3a71a23f-4a90-4716-9562-f7d45ed47208"). InnerVolumeSpecName "kube-api-access-qt5vs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:15:03 crc kubenswrapper[4786]: I0127 13:15:03.487808 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a71a23f-4a90-4716-9562-f7d45ed47208-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3a71a23f-4a90-4716-9562-f7d45ed47208" (UID: "3a71a23f-4a90-4716-9562-f7d45ed47208"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:15:03 crc kubenswrapper[4786]: I0127 13:15:03.583922 4786 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a71a23f-4a90-4716-9562-f7d45ed47208-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 13:15:03 crc kubenswrapper[4786]: I0127 13:15:03.583976 4786 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a71a23f-4a90-4716-9562-f7d45ed47208-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 13:15:03 crc kubenswrapper[4786]: I0127 13:15:03.583998 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qt5vs\" (UniqueName: \"kubernetes.io/projected/3a71a23f-4a90-4716-9562-f7d45ed47208-kube-api-access-qt5vs\") on node \"crc\" DevicePath \"\"" Jan 27 13:15:04 crc kubenswrapper[4786]: I0127 13:15:04.211694 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491995-v9pkz" 
event={"ID":"3a71a23f-4a90-4716-9562-f7d45ed47208","Type":"ContainerDied","Data":"f0ed734fb7f64c28d663f8a94521f2af8e34c824d3e83ce307b83dc73888dd29"} Jan 27 13:15:04 crc kubenswrapper[4786]: I0127 13:15:04.211737 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0ed734fb7f64c28d663f8a94521f2af8e34c824d3e83ce307b83dc73888dd29" Jan 27 13:15:04 crc kubenswrapper[4786]: I0127 13:15:04.211773 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491995-v9pkz" Jan 27 13:16:09 crc kubenswrapper[4786]: I0127 13:16:09.532524 4786 patch_prober.go:28] interesting pod/machine-config-daemon-7bxtk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:16:09 crc kubenswrapper[4786]: I0127 13:16:09.533105 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:16:39 crc kubenswrapper[4786]: I0127 13:16:39.533250 4786 patch_prober.go:28] interesting pod/machine-config-daemon-7bxtk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:16:39 crc kubenswrapper[4786]: I0127 13:16:39.533771 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:17:09 crc kubenswrapper[4786]: I0127 13:17:09.532292 4786 patch_prober.go:28] interesting pod/machine-config-daemon-7bxtk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:17:09 crc kubenswrapper[4786]: I0127 13:17:09.532846 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:17:09 crc kubenswrapper[4786]: I0127 13:17:09.532888 4786 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" Jan 27 13:17:09 crc kubenswrapper[4786]: I0127 13:17:09.533306 4786 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ba183c40aadd721fd57aabbb3b26c846c14e0b60f4d9d824803d67f76f6da170"} pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 13:17:09 crc kubenswrapper[4786]: I0127 13:17:09.533423 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" containerID="cri-o://ba183c40aadd721fd57aabbb3b26c846c14e0b60f4d9d824803d67f76f6da170" gracePeriod=600 Jan 27 13:17:09 crc kubenswrapper[4786]: I0127 13:17:09.897113 4786 generic.go:334] "Generic (PLEG): container finished" 
podID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerID="ba183c40aadd721fd57aabbb3b26c846c14e0b60f4d9d824803d67f76f6da170" exitCode=0 Jan 27 13:17:09 crc kubenswrapper[4786]: I0127 13:17:09.897162 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" event={"ID":"2c6a2646-52f7-41be-8a81-3fed6eac75cc","Type":"ContainerDied","Data":"ba183c40aadd721fd57aabbb3b26c846c14e0b60f4d9d824803d67f76f6da170"} Jan 27 13:17:09 crc kubenswrapper[4786]: I0127 13:17:09.897198 4786 scope.go:117] "RemoveContainer" containerID="68b3b3585ee3cd83b41e2ece5024314ee69b7da4279bac6a5facbdf2f311dbb0" Jan 27 13:17:10 crc kubenswrapper[4786]: I0127 13:17:10.905112 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" event={"ID":"2c6a2646-52f7-41be-8a81-3fed6eac75cc","Type":"ContainerStarted","Data":"b4b187fcf6836625d29a0b0273148285cc0698ee8c6ac736d4d724e9062a8e84"} Jan 27 13:19:09 crc kubenswrapper[4786]: I0127 13:19:09.533097 4786 patch_prober.go:28] interesting pod/machine-config-daemon-7bxtk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:19:09 crc kubenswrapper[4786]: I0127 13:19:09.533767 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:19:32 crc kubenswrapper[4786]: I0127 13:19:32.846880 4786 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 13:19:39 crc kubenswrapper[4786]: I0127 13:19:39.537926 4786 
patch_prober.go:28] interesting pod/machine-config-daemon-7bxtk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:19:39 crc kubenswrapper[4786]: I0127 13:19:39.538214 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:19:42 crc kubenswrapper[4786]: I0127 13:19:42.800751 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713svsrs"] Jan 27 13:19:42 crc kubenswrapper[4786]: E0127 13:19:42.802228 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a71a23f-4a90-4716-9562-f7d45ed47208" containerName="collect-profiles" Jan 27 13:19:42 crc kubenswrapper[4786]: I0127 13:19:42.802320 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a71a23f-4a90-4716-9562-f7d45ed47208" containerName="collect-profiles" Jan 27 13:19:42 crc kubenswrapper[4786]: I0127 13:19:42.802520 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a71a23f-4a90-4716-9562-f7d45ed47208" containerName="collect-profiles" Jan 27 13:19:42 crc kubenswrapper[4786]: I0127 13:19:42.803575 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713svsrs" Jan 27 13:19:42 crc kubenswrapper[4786]: I0127 13:19:42.813897 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 13:19:42 crc kubenswrapper[4786]: I0127 13:19:42.869625 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713svsrs"] Jan 27 13:19:42 crc kubenswrapper[4786]: I0127 13:19:42.942954 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvnzc\" (UniqueName: \"kubernetes.io/projected/d2d928ff-1d55-488f-92a8-9f2e8efd62f8-kube-api-access-tvnzc\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713svsrs\" (UID: \"d2d928ff-1d55-488f-92a8-9f2e8efd62f8\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713svsrs" Jan 27 13:19:42 crc kubenswrapper[4786]: I0127 13:19:42.943034 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d2d928ff-1d55-488f-92a8-9f2e8efd62f8-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713svsrs\" (UID: \"d2d928ff-1d55-488f-92a8-9f2e8efd62f8\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713svsrs" Jan 27 13:19:42 crc kubenswrapper[4786]: I0127 13:19:42.943058 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d2d928ff-1d55-488f-92a8-9f2e8efd62f8-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713svsrs\" (UID: \"d2d928ff-1d55-488f-92a8-9f2e8efd62f8\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713svsrs" Jan 27 13:19:43 crc kubenswrapper[4786]: 
I0127 13:19:43.044277 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvnzc\" (UniqueName: \"kubernetes.io/projected/d2d928ff-1d55-488f-92a8-9f2e8efd62f8-kube-api-access-tvnzc\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713svsrs\" (UID: \"d2d928ff-1d55-488f-92a8-9f2e8efd62f8\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713svsrs" Jan 27 13:19:43 crc kubenswrapper[4786]: I0127 13:19:43.044937 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d2d928ff-1d55-488f-92a8-9f2e8efd62f8-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713svsrs\" (UID: \"d2d928ff-1d55-488f-92a8-9f2e8efd62f8\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713svsrs" Jan 27 13:19:43 crc kubenswrapper[4786]: I0127 13:19:43.045123 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d2d928ff-1d55-488f-92a8-9f2e8efd62f8-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713svsrs\" (UID: \"d2d928ff-1d55-488f-92a8-9f2e8efd62f8\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713svsrs" Jan 27 13:19:43 crc kubenswrapper[4786]: I0127 13:19:43.045456 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d2d928ff-1d55-488f-92a8-9f2e8efd62f8-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713svsrs\" (UID: \"d2d928ff-1d55-488f-92a8-9f2e8efd62f8\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713svsrs" Jan 27 13:19:43 crc kubenswrapper[4786]: I0127 13:19:43.045691 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/d2d928ff-1d55-488f-92a8-9f2e8efd62f8-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713svsrs\" (UID: \"d2d928ff-1d55-488f-92a8-9f2e8efd62f8\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713svsrs" Jan 27 13:19:43 crc kubenswrapper[4786]: I0127 13:19:43.066035 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvnzc\" (UniqueName: \"kubernetes.io/projected/d2d928ff-1d55-488f-92a8-9f2e8efd62f8-kube-api-access-tvnzc\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713svsrs\" (UID: \"d2d928ff-1d55-488f-92a8-9f2e8efd62f8\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713svsrs" Jan 27 13:19:43 crc kubenswrapper[4786]: I0127 13:19:43.120738 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713svsrs" Jan 27 13:19:43 crc kubenswrapper[4786]: I0127 13:19:43.292352 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713svsrs"] Jan 27 13:19:43 crc kubenswrapper[4786]: I0127 13:19:43.714802 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713svsrs" event={"ID":"d2d928ff-1d55-488f-92a8-9f2e8efd62f8","Type":"ContainerStarted","Data":"fc9688cbcbd003a66f0f422ad0f2c6ec3c4a5f4e8904ef2e5a3bf2eff5bcfc45"} Jan 27 13:19:43 crc kubenswrapper[4786]: I0127 13:19:43.715135 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713svsrs" event={"ID":"d2d928ff-1d55-488f-92a8-9f2e8efd62f8","Type":"ContainerStarted","Data":"71e3cd4a7449e2abfa7e58bf951285e4a513094069864e153a5886b87d2be031"} Jan 27 13:19:44 crc kubenswrapper[4786]: I0127 13:19:44.514267 4786 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x985n"] Jan 27 13:19:44 crc kubenswrapper[4786]: I0127 13:19:44.515364 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x985n" Jan 27 13:19:44 crc kubenswrapper[4786]: I0127 13:19:44.519832 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x985n"] Jan 27 13:19:44 crc kubenswrapper[4786]: I0127 13:19:44.667734 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hlp6\" (UniqueName: \"kubernetes.io/projected/5009e0b3-cd7c-4080-9013-3749bcd40e4c-kube-api-access-6hlp6\") pod \"redhat-operators-x985n\" (UID: \"5009e0b3-cd7c-4080-9013-3749bcd40e4c\") " pod="openshift-marketplace/redhat-operators-x985n" Jan 27 13:19:44 crc kubenswrapper[4786]: I0127 13:19:44.667784 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5009e0b3-cd7c-4080-9013-3749bcd40e4c-utilities\") pod \"redhat-operators-x985n\" (UID: \"5009e0b3-cd7c-4080-9013-3749bcd40e4c\") " pod="openshift-marketplace/redhat-operators-x985n" Jan 27 13:19:44 crc kubenswrapper[4786]: I0127 13:19:44.667824 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5009e0b3-cd7c-4080-9013-3749bcd40e4c-catalog-content\") pod \"redhat-operators-x985n\" (UID: \"5009e0b3-cd7c-4080-9013-3749bcd40e4c\") " pod="openshift-marketplace/redhat-operators-x985n" Jan 27 13:19:44 crc kubenswrapper[4786]: I0127 13:19:44.721425 4786 generic.go:334] "Generic (PLEG): container finished" podID="d2d928ff-1d55-488f-92a8-9f2e8efd62f8" containerID="fc9688cbcbd003a66f0f422ad0f2c6ec3c4a5f4e8904ef2e5a3bf2eff5bcfc45" exitCode=0 Jan 27 13:19:44 crc kubenswrapper[4786]: I0127 
13:19:44.721478 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713svsrs" event={"ID":"d2d928ff-1d55-488f-92a8-9f2e8efd62f8","Type":"ContainerDied","Data":"fc9688cbcbd003a66f0f422ad0f2c6ec3c4a5f4e8904ef2e5a3bf2eff5bcfc45"} Jan 27 13:19:44 crc kubenswrapper[4786]: I0127 13:19:44.723333 4786 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 13:19:44 crc kubenswrapper[4786]: I0127 13:19:44.769124 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hlp6\" (UniqueName: \"kubernetes.io/projected/5009e0b3-cd7c-4080-9013-3749bcd40e4c-kube-api-access-6hlp6\") pod \"redhat-operators-x985n\" (UID: \"5009e0b3-cd7c-4080-9013-3749bcd40e4c\") " pod="openshift-marketplace/redhat-operators-x985n" Jan 27 13:19:44 crc kubenswrapper[4786]: I0127 13:19:44.769429 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5009e0b3-cd7c-4080-9013-3749bcd40e4c-utilities\") pod \"redhat-operators-x985n\" (UID: \"5009e0b3-cd7c-4080-9013-3749bcd40e4c\") " pod="openshift-marketplace/redhat-operators-x985n" Jan 27 13:19:44 crc kubenswrapper[4786]: I0127 13:19:44.769472 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5009e0b3-cd7c-4080-9013-3749bcd40e4c-catalog-content\") pod \"redhat-operators-x985n\" (UID: \"5009e0b3-cd7c-4080-9013-3749bcd40e4c\") " pod="openshift-marketplace/redhat-operators-x985n" Jan 27 13:19:44 crc kubenswrapper[4786]: I0127 13:19:44.769901 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5009e0b3-cd7c-4080-9013-3749bcd40e4c-utilities\") pod \"redhat-operators-x985n\" (UID: \"5009e0b3-cd7c-4080-9013-3749bcd40e4c\") " 
pod="openshift-marketplace/redhat-operators-x985n" Jan 27 13:19:44 crc kubenswrapper[4786]: I0127 13:19:44.769952 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5009e0b3-cd7c-4080-9013-3749bcd40e4c-catalog-content\") pod \"redhat-operators-x985n\" (UID: \"5009e0b3-cd7c-4080-9013-3749bcd40e4c\") " pod="openshift-marketplace/redhat-operators-x985n" Jan 27 13:19:44 crc kubenswrapper[4786]: I0127 13:19:44.801197 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hlp6\" (UniqueName: \"kubernetes.io/projected/5009e0b3-cd7c-4080-9013-3749bcd40e4c-kube-api-access-6hlp6\") pod \"redhat-operators-x985n\" (UID: \"5009e0b3-cd7c-4080-9013-3749bcd40e4c\") " pod="openshift-marketplace/redhat-operators-x985n" Jan 27 13:19:44 crc kubenswrapper[4786]: I0127 13:19:44.835260 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x985n" Jan 27 13:19:45 crc kubenswrapper[4786]: I0127 13:19:45.228786 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x985n"] Jan 27 13:19:45 crc kubenswrapper[4786]: I0127 13:19:45.727503 4786 generic.go:334] "Generic (PLEG): container finished" podID="5009e0b3-cd7c-4080-9013-3749bcd40e4c" containerID="e889bd53ea6fa169b8f2f3cbbe15901dade0861cbbf7b6100e4f35bece3f4af6" exitCode=0 Jan 27 13:19:45 crc kubenswrapper[4786]: I0127 13:19:45.727662 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x985n" event={"ID":"5009e0b3-cd7c-4080-9013-3749bcd40e4c","Type":"ContainerDied","Data":"e889bd53ea6fa169b8f2f3cbbe15901dade0861cbbf7b6100e4f35bece3f4af6"} Jan 27 13:19:45 crc kubenswrapper[4786]: I0127 13:19:45.727823 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x985n" 
event={"ID":"5009e0b3-cd7c-4080-9013-3749bcd40e4c","Type":"ContainerStarted","Data":"2c142140ba3210f351e31136c20f0270e5520645e1de6f0885138474a9f9b68f"} Jan 27 13:19:46 crc kubenswrapper[4786]: I0127 13:19:46.743155 4786 generic.go:334] "Generic (PLEG): container finished" podID="d2d928ff-1d55-488f-92a8-9f2e8efd62f8" containerID="f07d58dd3a7ac8008ea3c2c736263e1adb4654f484bf5742b84a2e7fd243d9a1" exitCode=0 Jan 27 13:19:46 crc kubenswrapper[4786]: I0127 13:19:46.743250 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713svsrs" event={"ID":"d2d928ff-1d55-488f-92a8-9f2e8efd62f8","Type":"ContainerDied","Data":"f07d58dd3a7ac8008ea3c2c736263e1adb4654f484bf5742b84a2e7fd243d9a1"} Jan 27 13:19:47 crc kubenswrapper[4786]: I0127 13:19:47.752265 4786 generic.go:334] "Generic (PLEG): container finished" podID="d2d928ff-1d55-488f-92a8-9f2e8efd62f8" containerID="19163ec1d63808709e16f44a4ef91a3953eeb50d71da7c22037cf24f7ea72cf1" exitCode=0 Jan 27 13:19:47 crc kubenswrapper[4786]: I0127 13:19:47.752360 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713svsrs" event={"ID":"d2d928ff-1d55-488f-92a8-9f2e8efd62f8","Type":"ContainerDied","Data":"19163ec1d63808709e16f44a4ef91a3953eeb50d71da7c22037cf24f7ea72cf1"} Jan 27 13:19:47 crc kubenswrapper[4786]: I0127 13:19:47.754270 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x985n" event={"ID":"5009e0b3-cd7c-4080-9013-3749bcd40e4c","Type":"ContainerStarted","Data":"d8bd5f9d5bfa0059779d5b5b83742196ecb4bcc33a6d296f2b9face4062d7db1"} Jan 27 13:19:48 crc kubenswrapper[4786]: I0127 13:19:48.760423 4786 generic.go:334] "Generic (PLEG): container finished" podID="5009e0b3-cd7c-4080-9013-3749bcd40e4c" containerID="d8bd5f9d5bfa0059779d5b5b83742196ecb4bcc33a6d296f2b9face4062d7db1" exitCode=0 Jan 27 13:19:48 crc kubenswrapper[4786]: 
I0127 13:19:48.760499 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x985n" event={"ID":"5009e0b3-cd7c-4080-9013-3749bcd40e4c","Type":"ContainerDied","Data":"d8bd5f9d5bfa0059779d5b5b83742196ecb4bcc33a6d296f2b9face4062d7db1"} Jan 27 13:19:48 crc kubenswrapper[4786]: I0127 13:19:48.987369 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713svsrs" Jan 27 13:19:49 crc kubenswrapper[4786]: I0127 13:19:49.121851 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvnzc\" (UniqueName: \"kubernetes.io/projected/d2d928ff-1d55-488f-92a8-9f2e8efd62f8-kube-api-access-tvnzc\") pod \"d2d928ff-1d55-488f-92a8-9f2e8efd62f8\" (UID: \"d2d928ff-1d55-488f-92a8-9f2e8efd62f8\") " Jan 27 13:19:49 crc kubenswrapper[4786]: I0127 13:19:49.122144 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d2d928ff-1d55-488f-92a8-9f2e8efd62f8-util\") pod \"d2d928ff-1d55-488f-92a8-9f2e8efd62f8\" (UID: \"d2d928ff-1d55-488f-92a8-9f2e8efd62f8\") " Jan 27 13:19:49 crc kubenswrapper[4786]: I0127 13:19:49.122193 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d2d928ff-1d55-488f-92a8-9f2e8efd62f8-bundle\") pod \"d2d928ff-1d55-488f-92a8-9f2e8efd62f8\" (UID: \"d2d928ff-1d55-488f-92a8-9f2e8efd62f8\") " Jan 27 13:19:49 crc kubenswrapper[4786]: I0127 13:19:49.123062 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2d928ff-1d55-488f-92a8-9f2e8efd62f8-bundle" (OuterVolumeSpecName: "bundle") pod "d2d928ff-1d55-488f-92a8-9f2e8efd62f8" (UID: "d2d928ff-1d55-488f-92a8-9f2e8efd62f8"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:19:49 crc kubenswrapper[4786]: I0127 13:19:49.127967 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2d928ff-1d55-488f-92a8-9f2e8efd62f8-kube-api-access-tvnzc" (OuterVolumeSpecName: "kube-api-access-tvnzc") pod "d2d928ff-1d55-488f-92a8-9f2e8efd62f8" (UID: "d2d928ff-1d55-488f-92a8-9f2e8efd62f8"). InnerVolumeSpecName "kube-api-access-tvnzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:19:49 crc kubenswrapper[4786]: I0127 13:19:49.223799 4786 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d2d928ff-1d55-488f-92a8-9f2e8efd62f8-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 13:19:49 crc kubenswrapper[4786]: I0127 13:19:49.223845 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvnzc\" (UniqueName: \"kubernetes.io/projected/d2d928ff-1d55-488f-92a8-9f2e8efd62f8-kube-api-access-tvnzc\") on node \"crc\" DevicePath \"\"" Jan 27 13:19:49 crc kubenswrapper[4786]: I0127 13:19:49.427341 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2d928ff-1d55-488f-92a8-9f2e8efd62f8-util" (OuterVolumeSpecName: "util") pod "d2d928ff-1d55-488f-92a8-9f2e8efd62f8" (UID: "d2d928ff-1d55-488f-92a8-9f2e8efd62f8"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:19:49 crc kubenswrapper[4786]: I0127 13:19:49.526802 4786 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d2d928ff-1d55-488f-92a8-9f2e8efd62f8-util\") on node \"crc\" DevicePath \"\"" Jan 27 13:19:49 crc kubenswrapper[4786]: I0127 13:19:49.768325 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x985n" event={"ID":"5009e0b3-cd7c-4080-9013-3749bcd40e4c","Type":"ContainerStarted","Data":"49c00db2b0316648f133117345c2df5fc64a0fe39449b13a1bf5616794897926"} Jan 27 13:19:49 crc kubenswrapper[4786]: I0127 13:19:49.770309 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713svsrs" event={"ID":"d2d928ff-1d55-488f-92a8-9f2e8efd62f8","Type":"ContainerDied","Data":"71e3cd4a7449e2abfa7e58bf951285e4a513094069864e153a5886b87d2be031"} Jan 27 13:19:49 crc kubenswrapper[4786]: I0127 13:19:49.770341 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713svsrs" Jan 27 13:19:49 crc kubenswrapper[4786]: I0127 13:19:49.770356 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71e3cd4a7449e2abfa7e58bf951285e4a513094069864e153a5886b87d2be031" Jan 27 13:19:50 crc kubenswrapper[4786]: I0127 13:19:50.028125 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x985n" podStartSLOduration=3.263667771 podStartE2EDuration="6.028089781s" podCreationTimestamp="2026-01-27 13:19:44 +0000 UTC" firstStartedPulling="2026-01-27 13:19:46.744984609 +0000 UTC m=+769.955598738" lastFinishedPulling="2026-01-27 13:19:49.509406629 +0000 UTC m=+772.720020748" observedRunningTime="2026-01-27 13:19:49.789140541 +0000 UTC m=+772.999754660" watchObservedRunningTime="2026-01-27 13:19:50.028089781 +0000 UTC m=+773.238703900" Jan 27 13:19:51 crc kubenswrapper[4786]: I0127 13:19:51.442881 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6d56q"] Jan 27 13:19:51 crc kubenswrapper[4786]: I0127 13:19:51.443575 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="ovn-controller" containerID="cri-o://fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9" gracePeriod=30 Jan 27 13:19:51 crc kubenswrapper[4786]: I0127 13:19:51.443743 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="ovn-acl-logging" containerID="cri-o://81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe" gracePeriod=30 Jan 27 13:19:51 crc kubenswrapper[4786]: I0127 13:19:51.443754 4786 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="kube-rbac-proxy-node" containerID="cri-o://0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43" gracePeriod=30 Jan 27 13:19:51 crc kubenswrapper[4786]: I0127 13:19:51.443775 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="nbdb" containerID="cri-o://0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0" gracePeriod=30 Jan 27 13:19:51 crc kubenswrapper[4786]: I0127 13:19:51.443859 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c" gracePeriod=30 Jan 27 13:19:51 crc kubenswrapper[4786]: I0127 13:19:51.443916 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="sbdb" containerID="cri-o://244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284" gracePeriod=30 Jan 27 13:19:51 crc kubenswrapper[4786]: I0127 13:19:51.443841 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="northd" containerID="cri-o://97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1" gracePeriod=30 Jan 27 13:19:51 crc kubenswrapper[4786]: I0127 13:19:51.485818 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="ovnkube-controller" 
containerID="cri-o://e7213dd9c86e4a91a6d918e3ce21d05931466c84940aa2d70a3e07fe480b4c7e" gracePeriod=30 Jan 27 13:19:51 crc kubenswrapper[4786]: I0127 13:19:51.786467 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9q6dk_a290f38c-b94c-4233-9d98-9a54a728cedb/kube-multus/2.log" Jan 27 13:19:51 crc kubenswrapper[4786]: I0127 13:19:51.786956 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9q6dk_a290f38c-b94c-4233-9d98-9a54a728cedb/kube-multus/1.log" Jan 27 13:19:51 crc kubenswrapper[4786]: I0127 13:19:51.787001 4786 generic.go:334] "Generic (PLEG): container finished" podID="a290f38c-b94c-4233-9d98-9a54a728cedb" containerID="22f3b0dc9f3dfb4b927b2423d15f1ec1295972f8ddb685dfc978ddd9f16c2ea4" exitCode=2 Jan 27 13:19:51 crc kubenswrapper[4786]: I0127 13:19:51.787057 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9q6dk" event={"ID":"a290f38c-b94c-4233-9d98-9a54a728cedb","Type":"ContainerDied","Data":"22f3b0dc9f3dfb4b927b2423d15f1ec1295972f8ddb685dfc978ddd9f16c2ea4"} Jan 27 13:19:51 crc kubenswrapper[4786]: I0127 13:19:51.787090 4786 scope.go:117] "RemoveContainer" containerID="e07ac0a78f8e6cfc9b374b179d250b45f9a86c225a033ed2c706b709a3007b7f" Jan 27 13:19:51 crc kubenswrapper[4786]: I0127 13:19:51.787503 4786 scope.go:117] "RemoveContainer" containerID="22f3b0dc9f3dfb4b927b2423d15f1ec1295972f8ddb685dfc978ddd9f16c2ea4" Jan 27 13:19:51 crc kubenswrapper[4786]: I0127 13:19:51.791426 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6d56q_ad21a31d-efbf-4c10-b3d1-0f6cf71793bd/ovnkube-controller/3.log" Jan 27 13:19:51 crc kubenswrapper[4786]: I0127 13:19:51.793255 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6d56q_ad21a31d-efbf-4c10-b3d1-0f6cf71793bd/ovn-acl-logging/0.log" Jan 27 13:19:51 crc kubenswrapper[4786]: I0127 13:19:51.793976 4786 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6d56q_ad21a31d-efbf-4c10-b3d1-0f6cf71793bd/ovn-controller/0.log" Jan 27 13:19:51 crc kubenswrapper[4786]: I0127 13:19:51.795025 4786 generic.go:334] "Generic (PLEG): container finished" podID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerID="e7213dd9c86e4a91a6d918e3ce21d05931466c84940aa2d70a3e07fe480b4c7e" exitCode=0 Jan 27 13:19:51 crc kubenswrapper[4786]: I0127 13:19:51.795057 4786 generic.go:334] "Generic (PLEG): container finished" podID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerID="74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c" exitCode=0 Jan 27 13:19:51 crc kubenswrapper[4786]: I0127 13:19:51.795067 4786 generic.go:334] "Generic (PLEG): container finished" podID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerID="0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43" exitCode=0 Jan 27 13:19:51 crc kubenswrapper[4786]: I0127 13:19:51.795077 4786 generic.go:334] "Generic (PLEG): container finished" podID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerID="81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe" exitCode=143 Jan 27 13:19:51 crc kubenswrapper[4786]: I0127 13:19:51.795086 4786 generic.go:334] "Generic (PLEG): container finished" podID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerID="fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9" exitCode=143 Jan 27 13:19:51 crc kubenswrapper[4786]: I0127 13:19:51.795109 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" event={"ID":"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd","Type":"ContainerDied","Data":"e7213dd9c86e4a91a6d918e3ce21d05931466c84940aa2d70a3e07fe480b4c7e"} Jan 27 13:19:51 crc kubenswrapper[4786]: I0127 13:19:51.795139 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" 
event={"ID":"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd","Type":"ContainerDied","Data":"74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c"} Jan 27 13:19:51 crc kubenswrapper[4786]: I0127 13:19:51.795152 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" event={"ID":"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd","Type":"ContainerDied","Data":"0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43"} Jan 27 13:19:51 crc kubenswrapper[4786]: I0127 13:19:51.795164 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" event={"ID":"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd","Type":"ContainerDied","Data":"81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe"} Jan 27 13:19:51 crc kubenswrapper[4786]: I0127 13:19:51.795175 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" event={"ID":"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd","Type":"ContainerDied","Data":"fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9"} Jan 27 13:19:51 crc kubenswrapper[4786]: I0127 13:19:51.887744 4786 scope.go:117] "RemoveContainer" containerID="1a2a9faaefef939acbc751ed484c251a13489af87b8f9f705fa46c6dbfb003e5" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.293716 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6d56q_ad21a31d-efbf-4c10-b3d1-0f6cf71793bd/ovn-acl-logging/0.log" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.294219 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6d56q_ad21a31d-efbf-4c10-b3d1-0f6cf71793bd/ovn-controller/0.log" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.294594 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.353901 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2x4h6"] Jan 27 13:19:52 crc kubenswrapper[4786]: E0127 13:19:52.354106 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2d928ff-1d55-488f-92a8-9f2e8efd62f8" containerName="extract" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.354116 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2d928ff-1d55-488f-92a8-9f2e8efd62f8" containerName="extract" Jan 27 13:19:52 crc kubenswrapper[4786]: E0127 13:19:52.354126 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2d928ff-1d55-488f-92a8-9f2e8efd62f8" containerName="util" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.354132 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2d928ff-1d55-488f-92a8-9f2e8efd62f8" containerName="util" Jan 27 13:19:52 crc kubenswrapper[4786]: E0127 13:19:52.354141 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.354148 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 13:19:52 crc kubenswrapper[4786]: E0127 13:19:52.354156 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="ovnkube-controller" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.354162 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="ovnkube-controller" Jan 27 13:19:52 crc kubenswrapper[4786]: E0127 13:19:52.354170 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" 
containerName="sbdb" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.354176 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="sbdb" Jan 27 13:19:52 crc kubenswrapper[4786]: E0127 13:19:52.354183 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="ovnkube-controller" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.354190 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="ovnkube-controller" Jan 27 13:19:52 crc kubenswrapper[4786]: E0127 13:19:52.354196 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="kubecfg-setup" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.354201 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="kubecfg-setup" Jan 27 13:19:52 crc kubenswrapper[4786]: E0127 13:19:52.354210 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="ovn-controller" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.354216 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="ovn-controller" Jan 27 13:19:52 crc kubenswrapper[4786]: E0127 13:19:52.354226 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2d928ff-1d55-488f-92a8-9f2e8efd62f8" containerName="pull" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.354231 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2d928ff-1d55-488f-92a8-9f2e8efd62f8" containerName="pull" Jan 27 13:19:52 crc kubenswrapper[4786]: E0127 13:19:52.354241 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="ovn-acl-logging" Jan 27 13:19:52 crc 
kubenswrapper[4786]: I0127 13:19:52.354247 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="ovn-acl-logging" Jan 27 13:19:52 crc kubenswrapper[4786]: E0127 13:19:52.354255 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="northd" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.354261 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="northd" Jan 27 13:19:52 crc kubenswrapper[4786]: E0127 13:19:52.354270 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="ovnkube-controller" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.354275 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="ovnkube-controller" Jan 27 13:19:52 crc kubenswrapper[4786]: E0127 13:19:52.354284 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="nbdb" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.354289 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="nbdb" Jan 27 13:19:52 crc kubenswrapper[4786]: E0127 13:19:52.354295 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="kube-rbac-proxy-node" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.354300 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="kube-rbac-proxy-node" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.354397 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="ovnkube-controller" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 
13:19:52.354404 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="ovnkube-controller" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.354411 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="sbdb" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.354421 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="ovn-controller" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.354429 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="ovnkube-controller" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.354437 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="ovnkube-controller" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.354448 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="ovn-acl-logging" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.354463 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="northd" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.354472 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2d928ff-1d55-488f-92a8-9f2e8efd62f8" containerName="extract" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.354481 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="kube-rbac-proxy-node" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.354488 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 13:19:52 crc 
kubenswrapper[4786]: I0127 13:19:52.354497 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="nbdb" Jan 27 13:19:52 crc kubenswrapper[4786]: E0127 13:19:52.354633 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="ovnkube-controller" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.354644 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="ovnkube-controller" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.354771 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="ovnkube-controller" Jan 27 13:19:52 crc kubenswrapper[4786]: E0127 13:19:52.354883 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="ovnkube-controller" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.354891 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerName="ovnkube-controller" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.356565 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.361853 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-host-slash\") pod \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.361899 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-var-lib-openvswitch\") pod \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.361927 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-host-kubelet\") pod \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.361948 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-systemd-units\") pod \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.361971 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-host-cni-bin\") pod \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.361955 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-host-slash" (OuterVolumeSpecName: "host-slash") pod "ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" (UID: "ad21a31d-efbf-4c10-b3d1-0f6cf71793bd"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.361995 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-log-socket\") pod \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.362019 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-run-systemd\") pod \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.362051 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-host-run-ovn-kubernetes\") pod \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.362077 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-ovn-node-metrics-cert\") pod \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.362112 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-ovnkube-script-lib\") pod 
\"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.362145 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-host-cni-netd\") pod \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.362171 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-etc-openvswitch\") pod \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.362195 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-ovnkube-config\") pod \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.362217 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rgf8\" (UniqueName: \"kubernetes.io/projected/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-kube-api-access-5rgf8\") pod \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.362250 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-run-ovn\") pod \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.362272 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-env-overrides\") pod \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.362297 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-node-log\") pod \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.362329 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-host-run-netns\") pod \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.362354 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-run-openvswitch\") pod \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.362384 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\" (UID: \"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd\") " Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.362531 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-host-var-lib-cni-networks-ovn-kubernetes\") 
pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.362557 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-run-openvswitch\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.362581 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-log-socket\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.362609 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-var-lib-openvswitch\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.362648 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-etc-openvswitch\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.362684 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-node-log\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.362706 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-systemd-units\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.362723 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-run-ovn\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.362018 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" (UID: "ad21a31d-efbf-4c10-b3d1-0f6cf71793bd"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.362074 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" (UID: "ad21a31d-efbf-4c10-b3d1-0f6cf71793bd"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.362100 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-log-socket" (OuterVolumeSpecName: "log-socket") pod "ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" (UID: "ad21a31d-efbf-4c10-b3d1-0f6cf71793bd"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.362115 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" (UID: "ad21a31d-efbf-4c10-b3d1-0f6cf71793bd"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.362151 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" (UID: "ad21a31d-efbf-4c10-b3d1-0f6cf71793bd"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.362175 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" (UID: "ad21a31d-efbf-4c10-b3d1-0f6cf71793bd"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.362749 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fea4443f-548b-4585-9b05-3ef9f1e126d0-ovn-node-metrics-cert\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.362932 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-host-kubelet\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.362993 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-host-run-netns\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.363025 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-host-cni-netd\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.363252 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fea4443f-548b-4585-9b05-3ef9f1e126d0-ovnkube-config\") pod \"ovnkube-node-2x4h6\" (UID: 
\"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.363283 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fea4443f-548b-4585-9b05-3ef9f1e126d0-ovnkube-script-lib\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.363335 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-host-run-ovn-kubernetes\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.363359 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lrjv\" (UniqueName: \"kubernetes.io/projected/fea4443f-548b-4585-9b05-3ef9f1e126d0-kube-api-access-7lrjv\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.363382 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fea4443f-548b-4585-9b05-3ef9f1e126d0-env-overrides\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.363419 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-host-cni-bin\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.363470 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-host-slash\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.363497 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-run-systemd\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.363490 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" (UID: "ad21a31d-efbf-4c10-b3d1-0f6cf71793bd"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.363578 4786 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.363594 4786 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-host-slash\") on node \"crc\" DevicePath \"\"" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.363613 4786 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.363643 4786 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.363655 4786 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.363666 4786 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.363678 4786 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-log-socket\") on node \"crc\" DevicePath \"\"" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.364092 
4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" (UID: "ad21a31d-efbf-4c10-b3d1-0f6cf71793bd"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.364430 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" (UID: "ad21a31d-efbf-4c10-b3d1-0f6cf71793bd"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.364485 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" (UID: "ad21a31d-efbf-4c10-b3d1-0f6cf71793bd"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.364522 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-node-log" (OuterVolumeSpecName: "node-log") pod "ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" (UID: "ad21a31d-efbf-4c10-b3d1-0f6cf71793bd"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.364540 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" (UID: "ad21a31d-efbf-4c10-b3d1-0f6cf71793bd"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.364549 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" (UID: "ad21a31d-efbf-4c10-b3d1-0f6cf71793bd"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.364635 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" (UID: "ad21a31d-efbf-4c10-b3d1-0f6cf71793bd"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.364677 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" (UID: "ad21a31d-efbf-4c10-b3d1-0f6cf71793bd"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.364938 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" (UID: "ad21a31d-efbf-4c10-b3d1-0f6cf71793bd"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.370079 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-kube-api-access-5rgf8" (OuterVolumeSpecName: "kube-api-access-5rgf8") pod "ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" (UID: "ad21a31d-efbf-4c10-b3d1-0f6cf71793bd"). InnerVolumeSpecName "kube-api-access-5rgf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.371029 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" (UID: "ad21a31d-efbf-4c10-b3d1-0f6cf71793bd"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.377519 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" (UID: "ad21a31d-efbf-4c10-b3d1-0f6cf71793bd"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.464192 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-node-log\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.464246 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-systemd-units\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.464271 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-run-ovn\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.464294 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fea4443f-548b-4585-9b05-3ef9f1e126d0-ovn-node-metrics-cert\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.464310 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-host-kubelet\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: 
I0127 13:19:52.464320 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-node-log\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.464327 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-systemd-units\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.464354 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-host-run-netns\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.464331 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-host-run-netns\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.464381 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-host-cni-netd\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.464382 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-run-ovn\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.464382 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-host-kubelet\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.464398 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fea4443f-548b-4585-9b05-3ef9f1e126d0-ovnkube-config\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.464423 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fea4443f-548b-4585-9b05-3ef9f1e126d0-ovnkube-script-lib\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.464446 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-host-run-ovn-kubernetes\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.464467 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lrjv\" (UniqueName: 
\"kubernetes.io/projected/fea4443f-548b-4585-9b05-3ef9f1e126d0-kube-api-access-7lrjv\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.464488 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fea4443f-548b-4585-9b05-3ef9f1e126d0-env-overrides\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.464513 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-host-cni-bin\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.464505 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-host-cni-netd\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.464565 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-host-slash\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.464545 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-host-slash\") pod \"ovnkube-node-2x4h6\" 
(UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.464593 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-host-run-ovn-kubernetes\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.464650 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-run-systemd\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.464704 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.464738 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-run-openvswitch\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.464774 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-log-socket\") pod \"ovnkube-node-2x4h6\" (UID: 
\"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.464806 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-var-lib-openvswitch\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.464840 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-etc-openvswitch\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.465014 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-host-cni-bin\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.465335 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fea4443f-548b-4585-9b05-3ef9f1e126d0-ovnkube-config\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.465363 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.465369 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-log-socket\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.465378 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-etc-openvswitch\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.465389 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fea4443f-548b-4585-9b05-3ef9f1e126d0-ovnkube-script-lib\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.465381 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-var-lib-openvswitch\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.465393 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-run-openvswitch\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 
13:19:52.465426 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fea4443f-548b-4585-9b05-3ef9f1e126d0-run-systemd\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.465413 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rgf8\" (UniqueName: \"kubernetes.io/projected/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-kube-api-access-5rgf8\") on node \"crc\" DevicePath \"\"" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.465461 4786 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.465472 4786 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.465481 4786 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-node-log\") on node \"crc\" DevicePath \"\"" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.465487 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fea4443f-548b-4585-9b05-3ef9f1e126d0-env-overrides\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.465492 4786 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-host-run-netns\") on node 
\"crc\" DevicePath \"\"" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.465532 4786 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.465546 4786 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.465559 4786 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.465570 4786 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.465580 4786 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.465590 4786 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.465598 4786 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:19:52 crc 
kubenswrapper[4786]: I0127 13:19:52.465611 4786 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.470219 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fea4443f-548b-4585-9b05-3ef9f1e126d0-ovn-node-metrics-cert\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.486894 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lrjv\" (UniqueName: \"kubernetes.io/projected/fea4443f-548b-4585-9b05-3ef9f1e126d0-kube-api-access-7lrjv\") pod \"ovnkube-node-2x4h6\" (UID: \"fea4443f-548b-4585-9b05-3ef9f1e126d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.708048 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:19:52 crc kubenswrapper[4786]: W0127 13:19:52.733860 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfea4443f_548b_4585_9b05_3ef9f1e126d0.slice/crio-88e54009be9a2f38b36a6d069d01e95e101fe014e517451685f42b993d5dcadb WatchSource:0}: Error finding container 88e54009be9a2f38b36a6d069d01e95e101fe014e517451685f42b993d5dcadb: Status 404 returned error can't find the container with id 88e54009be9a2f38b36a6d069d01e95e101fe014e517451685f42b993d5dcadb Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.802142 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" event={"ID":"fea4443f-548b-4585-9b05-3ef9f1e126d0","Type":"ContainerStarted","Data":"88e54009be9a2f38b36a6d069d01e95e101fe014e517451685f42b993d5dcadb"} Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.804182 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9q6dk_a290f38c-b94c-4233-9d98-9a54a728cedb/kube-multus/2.log" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.804244 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9q6dk" event={"ID":"a290f38c-b94c-4233-9d98-9a54a728cedb","Type":"ContainerStarted","Data":"842eb7f20283bf12fff4958bf091352d1e461d7c88476352d8158df8896e593b"} Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.807906 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6d56q_ad21a31d-efbf-4c10-b3d1-0f6cf71793bd/ovn-acl-logging/0.log" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.808403 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6d56q_ad21a31d-efbf-4c10-b3d1-0f6cf71793bd/ovn-controller/0.log" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.808842 4786 generic.go:334] 
"Generic (PLEG): container finished" podID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerID="244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284" exitCode=0 Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.808868 4786 generic.go:334] "Generic (PLEG): container finished" podID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerID="0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0" exitCode=0 Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.808877 4786 generic.go:334] "Generic (PLEG): container finished" podID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" containerID="97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1" exitCode=0 Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.808907 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" event={"ID":"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd","Type":"ContainerDied","Data":"244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284"} Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.808937 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" event={"ID":"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd","Type":"ContainerDied","Data":"0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0"} Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.808953 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" event={"ID":"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd","Type":"ContainerDied","Data":"97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1"} Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.808965 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" event={"ID":"ad21a31d-efbf-4c10-b3d1-0f6cf71793bd","Type":"ContainerDied","Data":"05a861605dad0e44cd137e3dcb8abd841b6ef4f225479e130cbb7feea7399bd8"} Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 
13:19:52.808979 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6d56q" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.808984 4786 scope.go:117] "RemoveContainer" containerID="e7213dd9c86e4a91a6d918e3ce21d05931466c84940aa2d70a3e07fe480b4c7e" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.829780 4786 scope.go:117] "RemoveContainer" containerID="244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.857791 4786 scope.go:117] "RemoveContainer" containerID="0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.886685 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6d56q"] Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.887870 4786 scope.go:117] "RemoveContainer" containerID="97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.888845 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6d56q"] Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.915632 4786 scope.go:117] "RemoveContainer" containerID="74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.970923 4786 scope.go:117] "RemoveContainer" containerID="0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.981657 4786 scope.go:117] "RemoveContainer" containerID="81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe" Jan 27 13:19:52 crc kubenswrapper[4786]: I0127 13:19:52.993542 4786 scope.go:117] "RemoveContainer" containerID="fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.004744 4786 scope.go:117] 
"RemoveContainer" containerID="6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.016402 4786 scope.go:117] "RemoveContainer" containerID="e7213dd9c86e4a91a6d918e3ce21d05931466c84940aa2d70a3e07fe480b4c7e" Jan 27 13:19:53 crc kubenswrapper[4786]: E0127 13:19:53.017151 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7213dd9c86e4a91a6d918e3ce21d05931466c84940aa2d70a3e07fe480b4c7e\": container with ID starting with e7213dd9c86e4a91a6d918e3ce21d05931466c84940aa2d70a3e07fe480b4c7e not found: ID does not exist" containerID="e7213dd9c86e4a91a6d918e3ce21d05931466c84940aa2d70a3e07fe480b4c7e" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.017194 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7213dd9c86e4a91a6d918e3ce21d05931466c84940aa2d70a3e07fe480b4c7e"} err="failed to get container status \"e7213dd9c86e4a91a6d918e3ce21d05931466c84940aa2d70a3e07fe480b4c7e\": rpc error: code = NotFound desc = could not find container \"e7213dd9c86e4a91a6d918e3ce21d05931466c84940aa2d70a3e07fe480b4c7e\": container with ID starting with e7213dd9c86e4a91a6d918e3ce21d05931466c84940aa2d70a3e07fe480b4c7e not found: ID does not exist" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.017224 4786 scope.go:117] "RemoveContainer" containerID="244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284" Jan 27 13:19:53 crc kubenswrapper[4786]: E0127 13:19:53.017642 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284\": container with ID starting with 244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284 not found: ID does not exist" containerID="244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284" Jan 27 13:19:53 crc 
kubenswrapper[4786]: I0127 13:19:53.017692 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284"} err="failed to get container status \"244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284\": rpc error: code = NotFound desc = could not find container \"244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284\": container with ID starting with 244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284 not found: ID does not exist" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.017723 4786 scope.go:117] "RemoveContainer" containerID="0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0" Jan 27 13:19:53 crc kubenswrapper[4786]: E0127 13:19:53.017985 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0\": container with ID starting with 0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0 not found: ID does not exist" containerID="0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.018010 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0"} err="failed to get container status \"0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0\": rpc error: code = NotFound desc = could not find container \"0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0\": container with ID starting with 0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0 not found: ID does not exist" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.018024 4786 scope.go:117] "RemoveContainer" containerID="97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1" Jan 27 
13:19:53 crc kubenswrapper[4786]: E0127 13:19:53.018294 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1\": container with ID starting with 97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1 not found: ID does not exist" containerID="97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.018343 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1"} err="failed to get container status \"97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1\": rpc error: code = NotFound desc = could not find container \"97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1\": container with ID starting with 97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1 not found: ID does not exist" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.018357 4786 scope.go:117] "RemoveContainer" containerID="74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c" Jan 27 13:19:53 crc kubenswrapper[4786]: E0127 13:19:53.018519 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c\": container with ID starting with 74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c not found: ID does not exist" containerID="74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.018533 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c"} err="failed to get container status 
\"74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c\": rpc error: code = NotFound desc = could not find container \"74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c\": container with ID starting with 74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c not found: ID does not exist" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.018547 4786 scope.go:117] "RemoveContainer" containerID="0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43" Jan 27 13:19:53 crc kubenswrapper[4786]: E0127 13:19:53.018708 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43\": container with ID starting with 0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43 not found: ID does not exist" containerID="0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.018727 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43"} err="failed to get container status \"0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43\": rpc error: code = NotFound desc = could not find container \"0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43\": container with ID starting with 0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43 not found: ID does not exist" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.018738 4786 scope.go:117] "RemoveContainer" containerID="81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe" Jan 27 13:19:53 crc kubenswrapper[4786]: E0127 13:19:53.018883 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe\": container with ID starting with 81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe not found: ID does not exist" containerID="81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.018902 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe"} err="failed to get container status \"81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe\": rpc error: code = NotFound desc = could not find container \"81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe\": container with ID starting with 81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe not found: ID does not exist" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.018914 4786 scope.go:117] "RemoveContainer" containerID="fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9" Jan 27 13:19:53 crc kubenswrapper[4786]: E0127 13:19:53.019082 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9\": container with ID starting with fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9 not found: ID does not exist" containerID="fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.019099 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9"} err="failed to get container status \"fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9\": rpc error: code = NotFound desc = could not find container \"fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9\": container with ID 
starting with fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9 not found: ID does not exist" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.019111 4786 scope.go:117] "RemoveContainer" containerID="6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f" Jan 27 13:19:53 crc kubenswrapper[4786]: E0127 13:19:53.019253 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\": container with ID starting with 6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f not found: ID does not exist" containerID="6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.019270 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f"} err="failed to get container status \"6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\": rpc error: code = NotFound desc = could not find container \"6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\": container with ID starting with 6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f not found: ID does not exist" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.019283 4786 scope.go:117] "RemoveContainer" containerID="e7213dd9c86e4a91a6d918e3ce21d05931466c84940aa2d70a3e07fe480b4c7e" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.019437 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7213dd9c86e4a91a6d918e3ce21d05931466c84940aa2d70a3e07fe480b4c7e"} err="failed to get container status \"e7213dd9c86e4a91a6d918e3ce21d05931466c84940aa2d70a3e07fe480b4c7e\": rpc error: code = NotFound desc = could not find container \"e7213dd9c86e4a91a6d918e3ce21d05931466c84940aa2d70a3e07fe480b4c7e\": 
container with ID starting with e7213dd9c86e4a91a6d918e3ce21d05931466c84940aa2d70a3e07fe480b4c7e not found: ID does not exist" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.019456 4786 scope.go:117] "RemoveContainer" containerID="244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.019605 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284"} err="failed to get container status \"244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284\": rpc error: code = NotFound desc = could not find container \"244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284\": container with ID starting with 244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284 not found: ID does not exist" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.019702 4786 scope.go:117] "RemoveContainer" containerID="0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.019880 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0"} err="failed to get container status \"0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0\": rpc error: code = NotFound desc = could not find container \"0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0\": container with ID starting with 0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0 not found: ID does not exist" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.019901 4786 scope.go:117] "RemoveContainer" containerID="97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.020066 4786 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1"} err="failed to get container status \"97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1\": rpc error: code = NotFound desc = could not find container \"97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1\": container with ID starting with 97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1 not found: ID does not exist" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.020093 4786 scope.go:117] "RemoveContainer" containerID="74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.020306 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c"} err="failed to get container status \"74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c\": rpc error: code = NotFound desc = could not find container \"74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c\": container with ID starting with 74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c not found: ID does not exist" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.020330 4786 scope.go:117] "RemoveContainer" containerID="0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.020531 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43"} err="failed to get container status \"0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43\": rpc error: code = NotFound desc = could not find container \"0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43\": container with ID starting with 0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43 not found: ID does not 
exist" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.020556 4786 scope.go:117] "RemoveContainer" containerID="81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.020752 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe"} err="failed to get container status \"81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe\": rpc error: code = NotFound desc = could not find container \"81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe\": container with ID starting with 81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe not found: ID does not exist" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.020769 4786 scope.go:117] "RemoveContainer" containerID="fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.020923 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9"} err="failed to get container status \"fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9\": rpc error: code = NotFound desc = could not find container \"fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9\": container with ID starting with fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9 not found: ID does not exist" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.020938 4786 scope.go:117] "RemoveContainer" containerID="6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.021125 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f"} err="failed to get container status 
\"6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\": rpc error: code = NotFound desc = could not find container \"6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\": container with ID starting with 6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f not found: ID does not exist" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.021149 4786 scope.go:117] "RemoveContainer" containerID="e7213dd9c86e4a91a6d918e3ce21d05931466c84940aa2d70a3e07fe480b4c7e" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.021354 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7213dd9c86e4a91a6d918e3ce21d05931466c84940aa2d70a3e07fe480b4c7e"} err="failed to get container status \"e7213dd9c86e4a91a6d918e3ce21d05931466c84940aa2d70a3e07fe480b4c7e\": rpc error: code = NotFound desc = could not find container \"e7213dd9c86e4a91a6d918e3ce21d05931466c84940aa2d70a3e07fe480b4c7e\": container with ID starting with e7213dd9c86e4a91a6d918e3ce21d05931466c84940aa2d70a3e07fe480b4c7e not found: ID does not exist" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.021375 4786 scope.go:117] "RemoveContainer" containerID="244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.021587 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284"} err="failed to get container status \"244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284\": rpc error: code = NotFound desc = could not find container \"244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284\": container with ID starting with 244409735dc75c880876ff57d57873a03826f0c839d24d8af9513aea965e2284 not found: ID does not exist" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.021644 4786 scope.go:117] "RemoveContainer" 
containerID="0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.021820 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0"} err="failed to get container status \"0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0\": rpc error: code = NotFound desc = could not find container \"0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0\": container with ID starting with 0189c9e02426035101961d0fd984d6443fbe489cb93e4ca9d15408ab6fd322b0 not found: ID does not exist" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.021840 4786 scope.go:117] "RemoveContainer" containerID="97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.022021 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1"} err="failed to get container status \"97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1\": rpc error: code = NotFound desc = could not find container \"97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1\": container with ID starting with 97a5ace4d80537a1c54c5e811772f8896a99659232fe5c90e38d16af66e633c1 not found: ID does not exist" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.022044 4786 scope.go:117] "RemoveContainer" containerID="74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.022211 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c"} err="failed to get container status \"74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c\": rpc error: code = NotFound desc = could 
not find container \"74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c\": container with ID starting with 74618e12d81a046bfaae3d2fcec6c781f49a8eae68bda07b6c481df34af8fd5c not found: ID does not exist" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.022235 4786 scope.go:117] "RemoveContainer" containerID="0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.022448 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43"} err="failed to get container status \"0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43\": rpc error: code = NotFound desc = could not find container \"0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43\": container with ID starting with 0bcfa073bac8049946701d1b8c443e0619841e522b9a1408e242fe3cdc58db43 not found: ID does not exist" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.022463 4786 scope.go:117] "RemoveContainer" containerID="81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.022763 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe"} err="failed to get container status \"81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe\": rpc error: code = NotFound desc = could not find container \"81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe\": container with ID starting with 81311c1beaa361da3833deb2502503d153a4a9f24f99e034e7f1a5f8be7cf8fe not found: ID does not exist" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.022781 4786 scope.go:117] "RemoveContainer" containerID="fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 
13:19:53.022945 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9"} err="failed to get container status \"fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9\": rpc error: code = NotFound desc = could not find container \"fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9\": container with ID starting with fda7afe1ac919351ea21d6de3cd5ff2382e494a201f39c38518ab8985519a0c9 not found: ID does not exist" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.022972 4786 scope.go:117] "RemoveContainer" containerID="6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.023129 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f"} err="failed to get container status \"6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\": rpc error: code = NotFound desc = could not find container \"6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f\": container with ID starting with 6d8644af47d5227316c61121fd96faf97310d985483100bb1378ccfeb953ef4f not found: ID does not exist" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.223887 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-jzcj9"] Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.224737 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-jzcj9" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.227865 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-9swd9" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.228189 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.228379 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.374696 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56jxx\" (UniqueName: \"kubernetes.io/projected/f81e4a8e-6374-4da8-a409-624cabf87029-kube-api-access-56jxx\") pod \"nmstate-operator-646758c888-jzcj9\" (UID: \"f81e4a8e-6374-4da8-a409-624cabf87029\") " pod="openshift-nmstate/nmstate-operator-646758c888-jzcj9" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.470210 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad21a31d-efbf-4c10-b3d1-0f6cf71793bd" path="/var/lib/kubelet/pods/ad21a31d-efbf-4c10-b3d1-0f6cf71793bd/volumes" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.475586 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56jxx\" (UniqueName: \"kubernetes.io/projected/f81e4a8e-6374-4da8-a409-624cabf87029-kube-api-access-56jxx\") pod \"nmstate-operator-646758c888-jzcj9\" (UID: \"f81e4a8e-6374-4da8-a409-624cabf87029\") " pod="openshift-nmstate/nmstate-operator-646758c888-jzcj9" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.499066 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56jxx\" (UniqueName: \"kubernetes.io/projected/f81e4a8e-6374-4da8-a409-624cabf87029-kube-api-access-56jxx\") pod 
\"nmstate-operator-646758c888-jzcj9\" (UID: \"f81e4a8e-6374-4da8-a409-624cabf87029\") " pod="openshift-nmstate/nmstate-operator-646758c888-jzcj9" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.539805 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-jzcj9" Jan 27 13:19:53 crc kubenswrapper[4786]: E0127 13:19:53.565379 4786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-jzcj9_openshift-nmstate_f81e4a8e-6374-4da8-a409-624cabf87029_0(30f092d13238ef9fd295afa3a9b4d6e3cbf74d4d335f422778676482b7de237b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 13:19:53 crc kubenswrapper[4786]: E0127 13:19:53.565444 4786 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-jzcj9_openshift-nmstate_f81e4a8e-6374-4da8-a409-624cabf87029_0(30f092d13238ef9fd295afa3a9b4d6e3cbf74d4d335f422778676482b7de237b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-operator-646758c888-jzcj9" Jan 27 13:19:53 crc kubenswrapper[4786]: E0127 13:19:53.565468 4786 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-jzcj9_openshift-nmstate_f81e4a8e-6374-4da8-a409-624cabf87029_0(30f092d13238ef9fd295afa3a9b4d6e3cbf74d4d335f422778676482b7de237b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-nmstate/nmstate-operator-646758c888-jzcj9" Jan 27 13:19:53 crc kubenswrapper[4786]: E0127 13:19:53.565511 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nmstate-operator-646758c888-jzcj9_openshift-nmstate(f81e4a8e-6374-4da8-a409-624cabf87029)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nmstate-operator-646758c888-jzcj9_openshift-nmstate(f81e4a8e-6374-4da8-a409-624cabf87029)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-jzcj9_openshift-nmstate_f81e4a8e-6374-4da8-a409-624cabf87029_0(30f092d13238ef9fd295afa3a9b4d6e3cbf74d4d335f422778676482b7de237b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-nmstate/nmstate-operator-646758c888-jzcj9" podUID="f81e4a8e-6374-4da8-a409-624cabf87029" Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.815078 4786 generic.go:334] "Generic (PLEG): container finished" podID="fea4443f-548b-4585-9b05-3ef9f1e126d0" containerID="b60899b1a714201d799682d23f851b9d1df5c86da9e68f3aca66a73664dd9269" exitCode=0 Jan 27 13:19:53 crc kubenswrapper[4786]: I0127 13:19:53.815184 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" event={"ID":"fea4443f-548b-4585-9b05-3ef9f1e126d0","Type":"ContainerDied","Data":"b60899b1a714201d799682d23f851b9d1df5c86da9e68f3aca66a73664dd9269"} Jan 27 13:19:54 crc kubenswrapper[4786]: I0127 13:19:54.823324 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" event={"ID":"fea4443f-548b-4585-9b05-3ef9f1e126d0","Type":"ContainerStarted","Data":"560414135a82d0f3d329d853163a25f278c9ec5ae13a9c43c0ac4184f83c0cf4"} Jan 27 13:19:54 crc kubenswrapper[4786]: I0127 13:19:54.823642 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" 
event={"ID":"fea4443f-548b-4585-9b05-3ef9f1e126d0","Type":"ContainerStarted","Data":"5c8e398d55b2fe906ad0f820802b63639751715bb0e8ba90ea3b3c1fe32b8c50"} Jan 27 13:19:54 crc kubenswrapper[4786]: I0127 13:19:54.823656 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" event={"ID":"fea4443f-548b-4585-9b05-3ef9f1e126d0","Type":"ContainerStarted","Data":"dd1a8ece9aaf57f3c6313c8866a7a44ce3f807f2e73e8b65ee40cdd168b94167"} Jan 27 13:19:54 crc kubenswrapper[4786]: I0127 13:19:54.823674 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" event={"ID":"fea4443f-548b-4585-9b05-3ef9f1e126d0","Type":"ContainerStarted","Data":"9d84785ef50051593da67c2b9ed620bc7a7fe44bfb687514f5d2eab6a5cc7c8f"} Jan 27 13:19:54 crc kubenswrapper[4786]: I0127 13:19:54.836131 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x985n" Jan 27 13:19:54 crc kubenswrapper[4786]: I0127 13:19:54.836265 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x985n" Jan 27 13:19:55 crc kubenswrapper[4786]: I0127 13:19:55.831581 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" event={"ID":"fea4443f-548b-4585-9b05-3ef9f1e126d0","Type":"ContainerStarted","Data":"d0a7515f86e2d009874824d2eebcbd6a0da694c0df70108e304af58f55344e84"} Jan 27 13:19:55 crc kubenswrapper[4786]: I0127 13:19:55.886431 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x985n" podUID="5009e0b3-cd7c-4080-9013-3749bcd40e4c" containerName="registry-server" probeResult="failure" output=< Jan 27 13:19:55 crc kubenswrapper[4786]: timeout: failed to connect service ":50051" within 1s Jan 27 13:19:55 crc kubenswrapper[4786]: > Jan 27 13:19:57 crc kubenswrapper[4786]: I0127 13:19:57.842770 4786 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" event={"ID":"fea4443f-548b-4585-9b05-3ef9f1e126d0","Type":"ContainerStarted","Data":"b8f50acec395635efdb71e8d510304bc402b3007b7d35075b4c78af12c756291"} Jan 27 13:20:01 crc kubenswrapper[4786]: I0127 13:20:01.867667 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" event={"ID":"fea4443f-548b-4585-9b05-3ef9f1e126d0","Type":"ContainerStarted","Data":"65f2f1d264d2b6683dc4aa50f538428cb3995162b57e4295a638cebf198ba1bb"} Jan 27 13:20:03 crc kubenswrapper[4786]: I0127 13:20:03.882937 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" event={"ID":"fea4443f-548b-4585-9b05-3ef9f1e126d0","Type":"ContainerStarted","Data":"aef46e82539b9fbf052d85bb7c8625dbf51d5c8a2dc0356b492d33c3da746262"} Jan 27 13:20:03 crc kubenswrapper[4786]: I0127 13:20:03.883225 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:20:03 crc kubenswrapper[4786]: I0127 13:20:03.883236 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:20:03 crc kubenswrapper[4786]: I0127 13:20:03.911720 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" podStartSLOduration=11.911701055 podStartE2EDuration="11.911701055s" podCreationTimestamp="2026-01-27 13:19:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:20:03.909253828 +0000 UTC m=+787.119867947" watchObservedRunningTime="2026-01-27 13:20:03.911701055 +0000 UTC m=+787.122315174" Jan 27 13:20:03 crc kubenswrapper[4786]: I0127 13:20:03.917161 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 
13:20:04 crc kubenswrapper[4786]: I0127 13:20:04.327967 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-jzcj9"] Jan 27 13:20:04 crc kubenswrapper[4786]: I0127 13:20:04.328340 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-jzcj9" Jan 27 13:20:04 crc kubenswrapper[4786]: I0127 13:20:04.328800 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-jzcj9" Jan 27 13:20:04 crc kubenswrapper[4786]: E0127 13:20:04.434881 4786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-jzcj9_openshift-nmstate_f81e4a8e-6374-4da8-a409-624cabf87029_0(32a93eab7306586520272cc09b91f0ae76e0e96f7616df06d9dd15ed03639b55): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 13:20:04 crc kubenswrapper[4786]: E0127 13:20:04.435211 4786 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-jzcj9_openshift-nmstate_f81e4a8e-6374-4da8-a409-624cabf87029_0(32a93eab7306586520272cc09b91f0ae76e0e96f7616df06d9dd15ed03639b55): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-operator-646758c888-jzcj9" Jan 27 13:20:04 crc kubenswrapper[4786]: E0127 13:20:04.435233 4786 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-jzcj9_openshift-nmstate_f81e4a8e-6374-4da8-a409-624cabf87029_0(32a93eab7306586520272cc09b91f0ae76e0e96f7616df06d9dd15ed03639b55): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-nmstate/nmstate-operator-646758c888-jzcj9" Jan 27 13:20:04 crc kubenswrapper[4786]: E0127 13:20:04.435281 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nmstate-operator-646758c888-jzcj9_openshift-nmstate(f81e4a8e-6374-4da8-a409-624cabf87029)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nmstate-operator-646758c888-jzcj9_openshift-nmstate(f81e4a8e-6374-4da8-a409-624cabf87029)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-jzcj9_openshift-nmstate_f81e4a8e-6374-4da8-a409-624cabf87029_0(32a93eab7306586520272cc09b91f0ae76e0e96f7616df06d9dd15ed03639b55): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-nmstate/nmstate-operator-646758c888-jzcj9" podUID="f81e4a8e-6374-4da8-a409-624cabf87029" Jan 27 13:20:04 crc kubenswrapper[4786]: I0127 13:20:04.874483 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x985n" Jan 27 13:20:04 crc kubenswrapper[4786]: I0127 13:20:04.887769 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:20:04 crc kubenswrapper[4786]: I0127 13:20:04.910252 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x985n" Jan 27 13:20:04 crc kubenswrapper[4786]: I0127 13:20:04.912219 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:20:05 crc kubenswrapper[4786]: I0127 13:20:05.102827 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x985n"] Jan 27 13:20:05 crc kubenswrapper[4786]: I0127 13:20:05.892539 4786 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-x985n" podUID="5009e0b3-cd7c-4080-9013-3749bcd40e4c" containerName="registry-server" containerID="cri-o://49c00db2b0316648f133117345c2df5fc64a0fe39449b13a1bf5616794897926" gracePeriod=2 Jan 27 13:20:06 crc kubenswrapper[4786]: I0127 13:20:06.594943 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x985n" Jan 27 13:20:06 crc kubenswrapper[4786]: I0127 13:20:06.658990 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5009e0b3-cd7c-4080-9013-3749bcd40e4c-catalog-content\") pod \"5009e0b3-cd7c-4080-9013-3749bcd40e4c\" (UID: \"5009e0b3-cd7c-4080-9013-3749bcd40e4c\") " Jan 27 13:20:06 crc kubenswrapper[4786]: I0127 13:20:06.659085 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hlp6\" (UniqueName: \"kubernetes.io/projected/5009e0b3-cd7c-4080-9013-3749bcd40e4c-kube-api-access-6hlp6\") pod \"5009e0b3-cd7c-4080-9013-3749bcd40e4c\" (UID: \"5009e0b3-cd7c-4080-9013-3749bcd40e4c\") " Jan 27 13:20:06 crc kubenswrapper[4786]: I0127 13:20:06.659104 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5009e0b3-cd7c-4080-9013-3749bcd40e4c-utilities\") pod \"5009e0b3-cd7c-4080-9013-3749bcd40e4c\" (UID: \"5009e0b3-cd7c-4080-9013-3749bcd40e4c\") " Jan 27 13:20:06 crc kubenswrapper[4786]: I0127 13:20:06.662316 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5009e0b3-cd7c-4080-9013-3749bcd40e4c-utilities" (OuterVolumeSpecName: "utilities") pod "5009e0b3-cd7c-4080-9013-3749bcd40e4c" (UID: "5009e0b3-cd7c-4080-9013-3749bcd40e4c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:20:06 crc kubenswrapper[4786]: I0127 13:20:06.670012 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5009e0b3-cd7c-4080-9013-3749bcd40e4c-kube-api-access-6hlp6" (OuterVolumeSpecName: "kube-api-access-6hlp6") pod "5009e0b3-cd7c-4080-9013-3749bcd40e4c" (UID: "5009e0b3-cd7c-4080-9013-3749bcd40e4c"). InnerVolumeSpecName "kube-api-access-6hlp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:20:06 crc kubenswrapper[4786]: I0127 13:20:06.761206 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hlp6\" (UniqueName: \"kubernetes.io/projected/5009e0b3-cd7c-4080-9013-3749bcd40e4c-kube-api-access-6hlp6\") on node \"crc\" DevicePath \"\"" Jan 27 13:20:06 crc kubenswrapper[4786]: I0127 13:20:06.761270 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5009e0b3-cd7c-4080-9013-3749bcd40e4c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 13:20:06 crc kubenswrapper[4786]: I0127 13:20:06.769128 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5009e0b3-cd7c-4080-9013-3749bcd40e4c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5009e0b3-cd7c-4080-9013-3749bcd40e4c" (UID: "5009e0b3-cd7c-4080-9013-3749bcd40e4c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:20:06 crc kubenswrapper[4786]: I0127 13:20:06.862416 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5009e0b3-cd7c-4080-9013-3749bcd40e4c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 13:20:06 crc kubenswrapper[4786]: I0127 13:20:06.902156 4786 generic.go:334] "Generic (PLEG): container finished" podID="5009e0b3-cd7c-4080-9013-3749bcd40e4c" containerID="49c00db2b0316648f133117345c2df5fc64a0fe39449b13a1bf5616794897926" exitCode=0 Jan 27 13:20:06 crc kubenswrapper[4786]: I0127 13:20:06.902231 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x985n" event={"ID":"5009e0b3-cd7c-4080-9013-3749bcd40e4c","Type":"ContainerDied","Data":"49c00db2b0316648f133117345c2df5fc64a0fe39449b13a1bf5616794897926"} Jan 27 13:20:06 crc kubenswrapper[4786]: I0127 13:20:06.902312 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x985n" event={"ID":"5009e0b3-cd7c-4080-9013-3749bcd40e4c","Type":"ContainerDied","Data":"2c142140ba3210f351e31136c20f0270e5520645e1de6f0885138474a9f9b68f"} Jan 27 13:20:06 crc kubenswrapper[4786]: I0127 13:20:06.902330 4786 scope.go:117] "RemoveContainer" containerID="49c00db2b0316648f133117345c2df5fc64a0fe39449b13a1bf5616794897926" Jan 27 13:20:06 crc kubenswrapper[4786]: I0127 13:20:06.902250 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x985n" Jan 27 13:20:06 crc kubenswrapper[4786]: I0127 13:20:06.920414 4786 scope.go:117] "RemoveContainer" containerID="d8bd5f9d5bfa0059779d5b5b83742196ecb4bcc33a6d296f2b9face4062d7db1" Jan 27 13:20:06 crc kubenswrapper[4786]: I0127 13:20:06.932962 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x985n"] Jan 27 13:20:06 crc kubenswrapper[4786]: I0127 13:20:06.939147 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-x985n"] Jan 27 13:20:06 crc kubenswrapper[4786]: I0127 13:20:06.966319 4786 scope.go:117] "RemoveContainer" containerID="e889bd53ea6fa169b8f2f3cbbe15901dade0861cbbf7b6100e4f35bece3f4af6" Jan 27 13:20:06 crc kubenswrapper[4786]: I0127 13:20:06.984511 4786 scope.go:117] "RemoveContainer" containerID="49c00db2b0316648f133117345c2df5fc64a0fe39449b13a1bf5616794897926" Jan 27 13:20:06 crc kubenswrapper[4786]: E0127 13:20:06.985117 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49c00db2b0316648f133117345c2df5fc64a0fe39449b13a1bf5616794897926\": container with ID starting with 49c00db2b0316648f133117345c2df5fc64a0fe39449b13a1bf5616794897926 not found: ID does not exist" containerID="49c00db2b0316648f133117345c2df5fc64a0fe39449b13a1bf5616794897926" Jan 27 13:20:06 crc kubenswrapper[4786]: I0127 13:20:06.985146 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49c00db2b0316648f133117345c2df5fc64a0fe39449b13a1bf5616794897926"} err="failed to get container status \"49c00db2b0316648f133117345c2df5fc64a0fe39449b13a1bf5616794897926\": rpc error: code = NotFound desc = could not find container \"49c00db2b0316648f133117345c2df5fc64a0fe39449b13a1bf5616794897926\": container with ID starting with 49c00db2b0316648f133117345c2df5fc64a0fe39449b13a1bf5616794897926 not found: ID does 
not exist" Jan 27 13:20:06 crc kubenswrapper[4786]: I0127 13:20:06.985174 4786 scope.go:117] "RemoveContainer" containerID="d8bd5f9d5bfa0059779d5b5b83742196ecb4bcc33a6d296f2b9face4062d7db1" Jan 27 13:20:06 crc kubenswrapper[4786]: E0127 13:20:06.985463 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8bd5f9d5bfa0059779d5b5b83742196ecb4bcc33a6d296f2b9face4062d7db1\": container with ID starting with d8bd5f9d5bfa0059779d5b5b83742196ecb4bcc33a6d296f2b9face4062d7db1 not found: ID does not exist" containerID="d8bd5f9d5bfa0059779d5b5b83742196ecb4bcc33a6d296f2b9face4062d7db1" Jan 27 13:20:06 crc kubenswrapper[4786]: I0127 13:20:06.985509 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8bd5f9d5bfa0059779d5b5b83742196ecb4bcc33a6d296f2b9face4062d7db1"} err="failed to get container status \"d8bd5f9d5bfa0059779d5b5b83742196ecb4bcc33a6d296f2b9face4062d7db1\": rpc error: code = NotFound desc = could not find container \"d8bd5f9d5bfa0059779d5b5b83742196ecb4bcc33a6d296f2b9face4062d7db1\": container with ID starting with d8bd5f9d5bfa0059779d5b5b83742196ecb4bcc33a6d296f2b9face4062d7db1 not found: ID does not exist" Jan 27 13:20:06 crc kubenswrapper[4786]: I0127 13:20:06.985539 4786 scope.go:117] "RemoveContainer" containerID="e889bd53ea6fa169b8f2f3cbbe15901dade0861cbbf7b6100e4f35bece3f4af6" Jan 27 13:20:06 crc kubenswrapper[4786]: E0127 13:20:06.986283 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e889bd53ea6fa169b8f2f3cbbe15901dade0861cbbf7b6100e4f35bece3f4af6\": container with ID starting with e889bd53ea6fa169b8f2f3cbbe15901dade0861cbbf7b6100e4f35bece3f4af6 not found: ID does not exist" containerID="e889bd53ea6fa169b8f2f3cbbe15901dade0861cbbf7b6100e4f35bece3f4af6" Jan 27 13:20:06 crc kubenswrapper[4786]: I0127 13:20:06.986310 4786 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e889bd53ea6fa169b8f2f3cbbe15901dade0861cbbf7b6100e4f35bece3f4af6"} err="failed to get container status \"e889bd53ea6fa169b8f2f3cbbe15901dade0861cbbf7b6100e4f35bece3f4af6\": rpc error: code = NotFound desc = could not find container \"e889bd53ea6fa169b8f2f3cbbe15901dade0861cbbf7b6100e4f35bece3f4af6\": container with ID starting with e889bd53ea6fa169b8f2f3cbbe15901dade0861cbbf7b6100e4f35bece3f4af6 not found: ID does not exist" Jan 27 13:20:07 crc kubenswrapper[4786]: I0127 13:20:07.470790 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5009e0b3-cd7c-4080-9013-3749bcd40e4c" path="/var/lib/kubelet/pods/5009e0b3-cd7c-4080-9013-3749bcd40e4c/volumes" Jan 27 13:20:09 crc kubenswrapper[4786]: I0127 13:20:09.533295 4786 patch_prober.go:28] interesting pod/machine-config-daemon-7bxtk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:20:09 crc kubenswrapper[4786]: I0127 13:20:09.534238 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:20:09 crc kubenswrapper[4786]: I0127 13:20:09.534336 4786 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" Jan 27 13:20:09 crc kubenswrapper[4786]: I0127 13:20:09.534960 4786 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b4b187fcf6836625d29a0b0273148285cc0698ee8c6ac736d4d724e9062a8e84"} 
pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 13:20:09 crc kubenswrapper[4786]: I0127 13:20:09.535097 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" containerID="cri-o://b4b187fcf6836625d29a0b0273148285cc0698ee8c6ac736d4d724e9062a8e84" gracePeriod=600 Jan 27 13:20:10 crc kubenswrapper[4786]: I0127 13:20:10.929924 4786 generic.go:334] "Generic (PLEG): container finished" podID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerID="b4b187fcf6836625d29a0b0273148285cc0698ee8c6ac736d4d724e9062a8e84" exitCode=0 Jan 27 13:20:10 crc kubenswrapper[4786]: I0127 13:20:10.930037 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" event={"ID":"2c6a2646-52f7-41be-8a81-3fed6eac75cc","Type":"ContainerDied","Data":"b4b187fcf6836625d29a0b0273148285cc0698ee8c6ac736d4d724e9062a8e84"} Jan 27 13:20:10 crc kubenswrapper[4786]: I0127 13:20:10.930563 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" event={"ID":"2c6a2646-52f7-41be-8a81-3fed6eac75cc","Type":"ContainerStarted","Data":"4d62cb08eff3ad117221cbb57b9b2f848974ad39a63a811cd7c2c3452ac8780c"} Jan 27 13:20:10 crc kubenswrapper[4786]: I0127 13:20:10.930660 4786 scope.go:117] "RemoveContainer" containerID="ba183c40aadd721fd57aabbb3b26c846c14e0b60f4d9d824803d67f76f6da170" Jan 27 13:20:16 crc kubenswrapper[4786]: I0127 13:20:16.463907 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-jzcj9" Jan 27 13:20:16 crc kubenswrapper[4786]: I0127 13:20:16.464540 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-jzcj9" Jan 27 13:20:16 crc kubenswrapper[4786]: I0127 13:20:16.643768 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-jzcj9"] Jan 27 13:20:16 crc kubenswrapper[4786]: W0127 13:20:16.648051 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf81e4a8e_6374_4da8_a409_624cabf87029.slice/crio-07e9ce9f9cdd5dd217551f59c89131c845922ca339d082b79427286d6fcd8a7c WatchSource:0}: Error finding container 07e9ce9f9cdd5dd217551f59c89131c845922ca339d082b79427286d6fcd8a7c: Status 404 returned error can't find the container with id 07e9ce9f9cdd5dd217551f59c89131c845922ca339d082b79427286d6fcd8a7c Jan 27 13:20:16 crc kubenswrapper[4786]: I0127 13:20:16.964632 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-jzcj9" event={"ID":"f81e4a8e-6374-4da8-a409-624cabf87029","Type":"ContainerStarted","Data":"07e9ce9f9cdd5dd217551f59c89131c845922ca339d082b79427286d6fcd8a7c"} Jan 27 13:20:18 crc kubenswrapper[4786]: I0127 13:20:18.976890 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-jzcj9" event={"ID":"f81e4a8e-6374-4da8-a409-624cabf87029","Type":"ContainerStarted","Data":"e4ebba3b460ce017697e97e7535a6e5bbce48c561acd898462c683c9b4ba84ca"} Jan 27 13:20:18 crc kubenswrapper[4786]: I0127 13:20:18.994562 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-jzcj9" podStartSLOduration=23.817458569 podStartE2EDuration="25.994545316s" podCreationTimestamp="2026-01-27 13:19:53 +0000 UTC" firstStartedPulling="2026-01-27 13:20:16.650557832 +0000 UTC m=+799.861171961" lastFinishedPulling="2026-01-27 13:20:18.827644589 +0000 UTC m=+802.038258708" observedRunningTime="2026-01-27 13:20:18.991316408 +0000 UTC 
m=+802.201930537" watchObservedRunningTime="2026-01-27 13:20:18.994545316 +0000 UTC m=+802.205159435" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.025068 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-rh4rm"] Jan 27 13:20:20 crc kubenswrapper[4786]: E0127 13:20:20.025272 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5009e0b3-cd7c-4080-9013-3749bcd40e4c" containerName="registry-server" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.025284 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="5009e0b3-cd7c-4080-9013-3749bcd40e4c" containerName="registry-server" Jan 27 13:20:20 crc kubenswrapper[4786]: E0127 13:20:20.025302 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5009e0b3-cd7c-4080-9013-3749bcd40e4c" containerName="extract-content" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.025309 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="5009e0b3-cd7c-4080-9013-3749bcd40e4c" containerName="extract-content" Jan 27 13:20:20 crc kubenswrapper[4786]: E0127 13:20:20.025319 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5009e0b3-cd7c-4080-9013-3749bcd40e4c" containerName="extract-utilities" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.025325 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="5009e0b3-cd7c-4080-9013-3749bcd40e4c" containerName="extract-utilities" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.025418 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="5009e0b3-cd7c-4080-9013-3749bcd40e4c" containerName="registry-server" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.025982 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-rh4rm" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.027881 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-pfzh9" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.045012 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-nvbzs"] Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.045845 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nvbzs" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.051002 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.060620 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-rh4rm"] Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.064314 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-cjq4w"] Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.065316 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-cjq4w" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.071015 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-nvbzs"] Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.124347 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/35fde08b-f604-431c-88a8-6fe254dc84aa-nmstate-lock\") pod \"nmstate-handler-cjq4w\" (UID: \"35fde08b-f604-431c-88a8-6fe254dc84aa\") " pod="openshift-nmstate/nmstate-handler-cjq4w" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.124431 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v4km\" (UniqueName: \"kubernetes.io/projected/19a395d7-a1ec-4c10-83ad-3f195b89fadd-kube-api-access-7v4km\") pod \"nmstate-metrics-54757c584b-rh4rm\" (UID: \"19a395d7-a1ec-4c10-83ad-3f195b89fadd\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-rh4rm" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.124462 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/35fde08b-f604-431c-88a8-6fe254dc84aa-dbus-socket\") pod \"nmstate-handler-cjq4w\" (UID: \"35fde08b-f604-431c-88a8-6fe254dc84aa\") " pod="openshift-nmstate/nmstate-handler-cjq4w" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.124494 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/59020701-49b7-412d-97a9-81ef6a905bb0-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-nvbzs\" (UID: \"59020701-49b7-412d-97a9-81ef6a905bb0\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nvbzs" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.124527 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/35fde08b-f604-431c-88a8-6fe254dc84aa-ovs-socket\") pod \"nmstate-handler-cjq4w\" (UID: \"35fde08b-f604-431c-88a8-6fe254dc84aa\") " pod="openshift-nmstate/nmstate-handler-cjq4w" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.124551 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlkm7\" (UniqueName: \"kubernetes.io/projected/59020701-49b7-412d-97a9-81ef6a905bb0-kube-api-access-vlkm7\") pod \"nmstate-webhook-8474b5b9d8-nvbzs\" (UID: \"59020701-49b7-412d-97a9-81ef6a905bb0\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nvbzs" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.124573 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dxxr\" (UniqueName: \"kubernetes.io/projected/35fde08b-f604-431c-88a8-6fe254dc84aa-kube-api-access-7dxxr\") pod \"nmstate-handler-cjq4w\" (UID: \"35fde08b-f604-431c-88a8-6fe254dc84aa\") " pod="openshift-nmstate/nmstate-handler-cjq4w" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.161829 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-55mnf"] Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.162501 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-55mnf" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.169169 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-jmw8r" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.169483 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.169510 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.173957 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-55mnf"] Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.226026 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/35fde08b-f604-431c-88a8-6fe254dc84aa-ovs-socket\") pod \"nmstate-handler-cjq4w\" (UID: \"35fde08b-f604-431c-88a8-6fe254dc84aa\") " pod="openshift-nmstate/nmstate-handler-cjq4w" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.226191 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t588d\" (UniqueName: \"kubernetes.io/projected/67f83441-1412-437c-9cb1-c38ee7b70182-kube-api-access-t588d\") pod \"nmstate-console-plugin-7754f76f8b-55mnf\" (UID: \"67f83441-1412-437c-9cb1-c38ee7b70182\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-55mnf" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.226120 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/35fde08b-f604-431c-88a8-6fe254dc84aa-ovs-socket\") pod \"nmstate-handler-cjq4w\" (UID: \"35fde08b-f604-431c-88a8-6fe254dc84aa\") " pod="openshift-nmstate/nmstate-handler-cjq4w" Jan 27 13:20:20 crc 
kubenswrapper[4786]: I0127 13:20:20.226278 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlkm7\" (UniqueName: \"kubernetes.io/projected/59020701-49b7-412d-97a9-81ef6a905bb0-kube-api-access-vlkm7\") pod \"nmstate-webhook-8474b5b9d8-nvbzs\" (UID: \"59020701-49b7-412d-97a9-81ef6a905bb0\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nvbzs" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.226338 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dxxr\" (UniqueName: \"kubernetes.io/projected/35fde08b-f604-431c-88a8-6fe254dc84aa-kube-api-access-7dxxr\") pod \"nmstate-handler-cjq4w\" (UID: \"35fde08b-f604-431c-88a8-6fe254dc84aa\") " pod="openshift-nmstate/nmstate-handler-cjq4w" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.226369 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/67f83441-1412-437c-9cb1-c38ee7b70182-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-55mnf\" (UID: \"67f83441-1412-437c-9cb1-c38ee7b70182\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-55mnf" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.226818 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/67f83441-1412-437c-9cb1-c38ee7b70182-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-55mnf\" (UID: \"67f83441-1412-437c-9cb1-c38ee7b70182\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-55mnf" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.226853 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/35fde08b-f604-431c-88a8-6fe254dc84aa-nmstate-lock\") pod \"nmstate-handler-cjq4w\" (UID: 
\"35fde08b-f604-431c-88a8-6fe254dc84aa\") " pod="openshift-nmstate/nmstate-handler-cjq4w" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.226884 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v4km\" (UniqueName: \"kubernetes.io/projected/19a395d7-a1ec-4c10-83ad-3f195b89fadd-kube-api-access-7v4km\") pod \"nmstate-metrics-54757c584b-rh4rm\" (UID: \"19a395d7-a1ec-4c10-83ad-3f195b89fadd\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-rh4rm" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.226946 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/35fde08b-f604-431c-88a8-6fe254dc84aa-dbus-socket\") pod \"nmstate-handler-cjq4w\" (UID: \"35fde08b-f604-431c-88a8-6fe254dc84aa\") " pod="openshift-nmstate/nmstate-handler-cjq4w" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.226947 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/35fde08b-f604-431c-88a8-6fe254dc84aa-nmstate-lock\") pod \"nmstate-handler-cjq4w\" (UID: \"35fde08b-f604-431c-88a8-6fe254dc84aa\") " pod="openshift-nmstate/nmstate-handler-cjq4w" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.227182 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/59020701-49b7-412d-97a9-81ef6a905bb0-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-nvbzs\" (UID: \"59020701-49b7-412d-97a9-81ef6a905bb0\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nvbzs" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.227280 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/35fde08b-f604-431c-88a8-6fe254dc84aa-dbus-socket\") pod \"nmstate-handler-cjq4w\" (UID: \"35fde08b-f604-431c-88a8-6fe254dc84aa\") " 
pod="openshift-nmstate/nmstate-handler-cjq4w" Jan 27 13:20:20 crc kubenswrapper[4786]: E0127 13:20:20.227306 4786 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 27 13:20:20 crc kubenswrapper[4786]: E0127 13:20:20.227369 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59020701-49b7-412d-97a9-81ef6a905bb0-tls-key-pair podName:59020701-49b7-412d-97a9-81ef6a905bb0 nodeName:}" failed. No retries permitted until 2026-01-27 13:20:20.727349865 +0000 UTC m=+803.937963984 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/59020701-49b7-412d-97a9-81ef6a905bb0-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-nvbzs" (UID: "59020701-49b7-412d-97a9-81ef6a905bb0") : secret "openshift-nmstate-webhook" not found Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.247870 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlkm7\" (UniqueName: \"kubernetes.io/projected/59020701-49b7-412d-97a9-81ef6a905bb0-kube-api-access-vlkm7\") pod \"nmstate-webhook-8474b5b9d8-nvbzs\" (UID: \"59020701-49b7-412d-97a9-81ef6a905bb0\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nvbzs" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.248082 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dxxr\" (UniqueName: \"kubernetes.io/projected/35fde08b-f604-431c-88a8-6fe254dc84aa-kube-api-access-7dxxr\") pod \"nmstate-handler-cjq4w\" (UID: \"35fde08b-f604-431c-88a8-6fe254dc84aa\") " pod="openshift-nmstate/nmstate-handler-cjq4w" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.256517 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v4km\" (UniqueName: \"kubernetes.io/projected/19a395d7-a1ec-4c10-83ad-3f195b89fadd-kube-api-access-7v4km\") pod \"nmstate-metrics-54757c584b-rh4rm\" 
(UID: \"19a395d7-a1ec-4c10-83ad-3f195b89fadd\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-rh4rm" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.328474 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t588d\" (UniqueName: \"kubernetes.io/projected/67f83441-1412-437c-9cb1-c38ee7b70182-kube-api-access-t588d\") pod \"nmstate-console-plugin-7754f76f8b-55mnf\" (UID: \"67f83441-1412-437c-9cb1-c38ee7b70182\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-55mnf" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.328544 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/67f83441-1412-437c-9cb1-c38ee7b70182-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-55mnf\" (UID: \"67f83441-1412-437c-9cb1-c38ee7b70182\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-55mnf" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.328571 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/67f83441-1412-437c-9cb1-c38ee7b70182-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-55mnf\" (UID: \"67f83441-1412-437c-9cb1-c38ee7b70182\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-55mnf" Jan 27 13:20:20 crc kubenswrapper[4786]: E0127 13:20:20.328724 4786 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 27 13:20:20 crc kubenswrapper[4786]: E0127 13:20:20.328771 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67f83441-1412-437c-9cb1-c38ee7b70182-plugin-serving-cert podName:67f83441-1412-437c-9cb1-c38ee7b70182 nodeName:}" failed. No retries permitted until 2026-01-27 13:20:20.828756679 +0000 UTC m=+804.039370798 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/67f83441-1412-437c-9cb1-c38ee7b70182-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-55mnf" (UID: "67f83441-1412-437c-9cb1-c38ee7b70182") : secret "plugin-serving-cert" not found Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.329455 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/67f83441-1412-437c-9cb1-c38ee7b70182-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-55mnf\" (UID: \"67f83441-1412-437c-9cb1-c38ee7b70182\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-55mnf" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.340292 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-rh4rm" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.355067 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5c7595d455-5t5mm"] Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.355780 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5c7595d455-5t5mm" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.361842 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t588d\" (UniqueName: \"kubernetes.io/projected/67f83441-1412-437c-9cb1-c38ee7b70182-kube-api-access-t588d\") pod \"nmstate-console-plugin-7754f76f8b-55mnf\" (UID: \"67f83441-1412-437c-9cb1-c38ee7b70182\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-55mnf" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.368303 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5c7595d455-5t5mm"] Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.378200 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-cjq4w" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.429660 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1d5b266a-2fae-4634-8528-f14ebd42c2f2-console-config\") pod \"console-5c7595d455-5t5mm\" (UID: \"1d5b266a-2fae-4634-8528-f14ebd42c2f2\") " pod="openshift-console/console-5c7595d455-5t5mm" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.429747 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1d5b266a-2fae-4634-8528-f14ebd42c2f2-oauth-serving-cert\") pod \"console-5c7595d455-5t5mm\" (UID: \"1d5b266a-2fae-4634-8528-f14ebd42c2f2\") " pod="openshift-console/console-5c7595d455-5t5mm" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.429791 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d5b266a-2fae-4634-8528-f14ebd42c2f2-trusted-ca-bundle\") pod \"console-5c7595d455-5t5mm\" (UID: \"1d5b266a-2fae-4634-8528-f14ebd42c2f2\") " pod="openshift-console/console-5c7595d455-5t5mm" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.429822 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcrlm\" (UniqueName: \"kubernetes.io/projected/1d5b266a-2fae-4634-8528-f14ebd42c2f2-kube-api-access-wcrlm\") pod \"console-5c7595d455-5t5mm\" (UID: \"1d5b266a-2fae-4634-8528-f14ebd42c2f2\") " pod="openshift-console/console-5c7595d455-5t5mm" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.429849 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1d5b266a-2fae-4634-8528-f14ebd42c2f2-service-ca\") 
pod \"console-5c7595d455-5t5mm\" (UID: \"1d5b266a-2fae-4634-8528-f14ebd42c2f2\") " pod="openshift-console/console-5c7595d455-5t5mm" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.429874 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1d5b266a-2fae-4634-8528-f14ebd42c2f2-console-serving-cert\") pod \"console-5c7595d455-5t5mm\" (UID: \"1d5b266a-2fae-4634-8528-f14ebd42c2f2\") " pod="openshift-console/console-5c7595d455-5t5mm" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.429914 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1d5b266a-2fae-4634-8528-f14ebd42c2f2-console-oauth-config\") pod \"console-5c7595d455-5t5mm\" (UID: \"1d5b266a-2fae-4634-8528-f14ebd42c2f2\") " pod="openshift-console/console-5c7595d455-5t5mm" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.531295 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1d5b266a-2fae-4634-8528-f14ebd42c2f2-oauth-serving-cert\") pod \"console-5c7595d455-5t5mm\" (UID: \"1d5b266a-2fae-4634-8528-f14ebd42c2f2\") " pod="openshift-console/console-5c7595d455-5t5mm" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.531995 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d5b266a-2fae-4634-8528-f14ebd42c2f2-trusted-ca-bundle\") pod \"console-5c7595d455-5t5mm\" (UID: \"1d5b266a-2fae-4634-8528-f14ebd42c2f2\") " pod="openshift-console/console-5c7595d455-5t5mm" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.532052 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcrlm\" (UniqueName: 
\"kubernetes.io/projected/1d5b266a-2fae-4634-8528-f14ebd42c2f2-kube-api-access-wcrlm\") pod \"console-5c7595d455-5t5mm\" (UID: \"1d5b266a-2fae-4634-8528-f14ebd42c2f2\") " pod="openshift-console/console-5c7595d455-5t5mm" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.532077 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1d5b266a-2fae-4634-8528-f14ebd42c2f2-service-ca\") pod \"console-5c7595d455-5t5mm\" (UID: \"1d5b266a-2fae-4634-8528-f14ebd42c2f2\") " pod="openshift-console/console-5c7595d455-5t5mm" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.532098 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1d5b266a-2fae-4634-8528-f14ebd42c2f2-console-serving-cert\") pod \"console-5c7595d455-5t5mm\" (UID: \"1d5b266a-2fae-4634-8528-f14ebd42c2f2\") " pod="openshift-console/console-5c7595d455-5t5mm" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.532138 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1d5b266a-2fae-4634-8528-f14ebd42c2f2-console-oauth-config\") pod \"console-5c7595d455-5t5mm\" (UID: \"1d5b266a-2fae-4634-8528-f14ebd42c2f2\") " pod="openshift-console/console-5c7595d455-5t5mm" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.532230 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1d5b266a-2fae-4634-8528-f14ebd42c2f2-console-config\") pod \"console-5c7595d455-5t5mm\" (UID: \"1d5b266a-2fae-4634-8528-f14ebd42c2f2\") " pod="openshift-console/console-5c7595d455-5t5mm" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.533512 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/1d5b266a-2fae-4634-8528-f14ebd42c2f2-console-config\") pod \"console-5c7595d455-5t5mm\" (UID: \"1d5b266a-2fae-4634-8528-f14ebd42c2f2\") " pod="openshift-console/console-5c7595d455-5t5mm" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.533516 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1d5b266a-2fae-4634-8528-f14ebd42c2f2-service-ca\") pod \"console-5c7595d455-5t5mm\" (UID: \"1d5b266a-2fae-4634-8528-f14ebd42c2f2\") " pod="openshift-console/console-5c7595d455-5t5mm" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.534378 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d5b266a-2fae-4634-8528-f14ebd42c2f2-trusted-ca-bundle\") pod \"console-5c7595d455-5t5mm\" (UID: \"1d5b266a-2fae-4634-8528-f14ebd42c2f2\") " pod="openshift-console/console-5c7595d455-5t5mm" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.536148 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1d5b266a-2fae-4634-8528-f14ebd42c2f2-oauth-serving-cert\") pod \"console-5c7595d455-5t5mm\" (UID: \"1d5b266a-2fae-4634-8528-f14ebd42c2f2\") " pod="openshift-console/console-5c7595d455-5t5mm" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.538686 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1d5b266a-2fae-4634-8528-f14ebd42c2f2-console-oauth-config\") pod \"console-5c7595d455-5t5mm\" (UID: \"1d5b266a-2fae-4634-8528-f14ebd42c2f2\") " pod="openshift-console/console-5c7595d455-5t5mm" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.541037 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1d5b266a-2fae-4634-8528-f14ebd42c2f2-console-serving-cert\") pod \"console-5c7595d455-5t5mm\" (UID: \"1d5b266a-2fae-4634-8528-f14ebd42c2f2\") " pod="openshift-console/console-5c7595d455-5t5mm" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.557455 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcrlm\" (UniqueName: \"kubernetes.io/projected/1d5b266a-2fae-4634-8528-f14ebd42c2f2-kube-api-access-wcrlm\") pod \"console-5c7595d455-5t5mm\" (UID: \"1d5b266a-2fae-4634-8528-f14ebd42c2f2\") " pod="openshift-console/console-5c7595d455-5t5mm" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.734847 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/59020701-49b7-412d-97a9-81ef6a905bb0-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-nvbzs\" (UID: \"59020701-49b7-412d-97a9-81ef6a905bb0\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nvbzs" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.737978 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5c7595d455-5t5mm" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.739284 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/59020701-49b7-412d-97a9-81ef6a905bb0-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-nvbzs\" (UID: \"59020701-49b7-412d-97a9-81ef6a905bb0\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nvbzs" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.819274 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-rh4rm"] Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.835904 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/67f83441-1412-437c-9cb1-c38ee7b70182-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-55mnf\" (UID: \"67f83441-1412-437c-9cb1-c38ee7b70182\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-55mnf" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.841475 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/67f83441-1412-437c-9cb1-c38ee7b70182-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-55mnf\" (UID: \"67f83441-1412-437c-9cb1-c38ee7b70182\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-55mnf" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.961358 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nvbzs" Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.985987 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5c7595d455-5t5mm"] Jan 27 13:20:20 crc kubenswrapper[4786]: W0127 13:20:20.989341 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d5b266a_2fae_4634_8528_f14ebd42c2f2.slice/crio-ac108f641c74f4df85b6fe311985b44b9701821b600e204b5149f1c6e5461fe7 WatchSource:0}: Error finding container ac108f641c74f4df85b6fe311985b44b9701821b600e204b5149f1c6e5461fe7: Status 404 returned error can't find the container with id ac108f641c74f4df85b6fe311985b44b9701821b600e204b5149f1c6e5461fe7 Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.990145 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-cjq4w" event={"ID":"35fde08b-f604-431c-88a8-6fe254dc84aa","Type":"ContainerStarted","Data":"8c613c1500c0299b8c0beec75b5d4c045da7913518fed633534e9313b5398f42"} Jan 27 13:20:20 crc kubenswrapper[4786]: I0127 13:20:20.991495 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-rh4rm" event={"ID":"19a395d7-a1ec-4c10-83ad-3f195b89fadd","Type":"ContainerStarted","Data":"ff5d12733d56520a72c50298dfe5aeb92836b2fd15cc718309df7c13e52b323c"} Jan 27 13:20:21 crc kubenswrapper[4786]: I0127 13:20:21.082051 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-55mnf" Jan 27 13:20:21 crc kubenswrapper[4786]: I0127 13:20:21.196011 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-nvbzs"] Jan 27 13:20:21 crc kubenswrapper[4786]: W0127 13:20:21.205771 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59020701_49b7_412d_97a9_81ef6a905bb0.slice/crio-93d43b6829b8d64721cec77e3006801f5abc3417f081be0a1ad1fe2dd529f1cb WatchSource:0}: Error finding container 93d43b6829b8d64721cec77e3006801f5abc3417f081be0a1ad1fe2dd529f1cb: Status 404 returned error can't find the container with id 93d43b6829b8d64721cec77e3006801f5abc3417f081be0a1ad1fe2dd529f1cb Jan 27 13:20:21 crc kubenswrapper[4786]: I0127 13:20:21.292908 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-55mnf"] Jan 27 13:20:21 crc kubenswrapper[4786]: I0127 13:20:21.996916 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-55mnf" event={"ID":"67f83441-1412-437c-9cb1-c38ee7b70182","Type":"ContainerStarted","Data":"36d73bb9a387099983b7e9a5077c257125f981679565e2c5ecdfbdb2559e0bf9"} Jan 27 13:20:21 crc kubenswrapper[4786]: I0127 13:20:21.998679 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5c7595d455-5t5mm" event={"ID":"1d5b266a-2fae-4634-8528-f14ebd42c2f2","Type":"ContainerStarted","Data":"101ee00ae6372a71215f54e7ae46b32e30ecb490f0f131cb53d2e122b2320bee"} Jan 27 13:20:21 crc kubenswrapper[4786]: I0127 13:20:21.998711 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5c7595d455-5t5mm" event={"ID":"1d5b266a-2fae-4634-8528-f14ebd42c2f2","Type":"ContainerStarted","Data":"ac108f641c74f4df85b6fe311985b44b9701821b600e204b5149f1c6e5461fe7"} Jan 27 13:20:22 crc kubenswrapper[4786]: 
I0127 13:20:22.001531 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nvbzs" event={"ID":"59020701-49b7-412d-97a9-81ef6a905bb0","Type":"ContainerStarted","Data":"93d43b6829b8d64721cec77e3006801f5abc3417f081be0a1ad1fe2dd529f1cb"} Jan 27 13:20:22 crc kubenswrapper[4786]: I0127 13:20:22.021181 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5c7595d455-5t5mm" podStartSLOduration=2.02116558 podStartE2EDuration="2.02116558s" podCreationTimestamp="2026-01-27 13:20:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:20:22.017822669 +0000 UTC m=+805.228436788" watchObservedRunningTime="2026-01-27 13:20:22.02116558 +0000 UTC m=+805.231779699" Jan 27 13:20:22 crc kubenswrapper[4786]: I0127 13:20:22.739570 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2x4h6" Jan 27 13:20:25 crc kubenswrapper[4786]: I0127 13:20:25.032778 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nvbzs" event={"ID":"59020701-49b7-412d-97a9-81ef6a905bb0","Type":"ContainerStarted","Data":"27183b7cf86f1aa3add15c53f1614a1e181ebeca4980a543106dccecf2953265"} Jan 27 13:20:25 crc kubenswrapper[4786]: I0127 13:20:25.035555 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-rh4rm" event={"ID":"19a395d7-a1ec-4c10-83ad-3f195b89fadd","Type":"ContainerStarted","Data":"cc31012ec8a5ca6baaa0e133874d89b3e3d95fd55a4873463a5f6fb4176d1c1f"} Jan 27 13:20:25 crc kubenswrapper[4786]: I0127 13:20:25.037219 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-cjq4w" 
event={"ID":"35fde08b-f604-431c-88a8-6fe254dc84aa","Type":"ContainerStarted","Data":"b1bf493ba83f59142739bad22d91f0c88344872f3c624e2cd20be900ddd69db6"} Jan 27 13:20:25 crc kubenswrapper[4786]: I0127 13:20:25.041349 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-55mnf" event={"ID":"67f83441-1412-437c-9cb1-c38ee7b70182","Type":"ContainerStarted","Data":"c602907bbdffcbb78a16981b468d0c1d261f2d5f18fef60f820c66d1c313d25a"} Jan 27 13:20:26 crc kubenswrapper[4786]: I0127 13:20:26.046240 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-cjq4w" Jan 27 13:20:26 crc kubenswrapper[4786]: I0127 13:20:26.046713 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nvbzs" Jan 27 13:20:26 crc kubenswrapper[4786]: I0127 13:20:26.062443 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-55mnf" podStartSLOduration=2.721563813 podStartE2EDuration="6.062411618s" podCreationTimestamp="2026-01-27 13:20:20 +0000 UTC" firstStartedPulling="2026-01-27 13:20:21.303024743 +0000 UTC m=+804.513638862" lastFinishedPulling="2026-01-27 13:20:24.643872548 +0000 UTC m=+807.854486667" observedRunningTime="2026-01-27 13:20:25.065319351 +0000 UTC m=+808.275933470" watchObservedRunningTime="2026-01-27 13:20:26.062411618 +0000 UTC m=+809.273025737" Jan 27 13:20:26 crc kubenswrapper[4786]: I0127 13:20:26.067110 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nvbzs" podStartSLOduration=2.630681907 podStartE2EDuration="6.067093576s" podCreationTimestamp="2026-01-27 13:20:20 +0000 UTC" firstStartedPulling="2026-01-27 13:20:21.2074904 +0000 UTC m=+804.418104519" lastFinishedPulling="2026-01-27 13:20:24.643902069 +0000 UTC m=+807.854516188" observedRunningTime="2026-01-27 
13:20:26.061031471 +0000 UTC m=+809.271645590" watchObservedRunningTime="2026-01-27 13:20:26.067093576 +0000 UTC m=+809.277707695" Jan 27 13:20:26 crc kubenswrapper[4786]: I0127 13:20:26.084036 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-cjq4w" podStartSLOduration=1.865601912 podStartE2EDuration="6.084020447s" podCreationTimestamp="2026-01-27 13:20:20 +0000 UTC" firstStartedPulling="2026-01-27 13:20:20.43635064 +0000 UTC m=+803.646964759" lastFinishedPulling="2026-01-27 13:20:24.654769175 +0000 UTC m=+807.865383294" observedRunningTime="2026-01-27 13:20:26.08230233 +0000 UTC m=+809.292916459" watchObservedRunningTime="2026-01-27 13:20:26.084020447 +0000 UTC m=+809.294634566" Jan 27 13:20:29 crc kubenswrapper[4786]: I0127 13:20:29.066899 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-rh4rm" event={"ID":"19a395d7-a1ec-4c10-83ad-3f195b89fadd","Type":"ContainerStarted","Data":"dea53250764540687d26622be81ffaa6c966faf8b8e2680f290a9fb140338167"} Jan 27 13:20:29 crc kubenswrapper[4786]: I0127 13:20:29.081000 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-rh4rm" podStartSLOduration=1.22326452 podStartE2EDuration="9.080983333s" podCreationTimestamp="2026-01-27 13:20:20 +0000 UTC" firstStartedPulling="2026-01-27 13:20:20.843064521 +0000 UTC m=+804.053678640" lastFinishedPulling="2026-01-27 13:20:28.700783334 +0000 UTC m=+811.911397453" observedRunningTime="2026-01-27 13:20:29.08012914 +0000 UTC m=+812.290743259" watchObservedRunningTime="2026-01-27 13:20:29.080983333 +0000 UTC m=+812.291597452" Jan 27 13:20:30 crc kubenswrapper[4786]: I0127 13:20:30.401119 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-cjq4w" Jan 27 13:20:30 crc kubenswrapper[4786]: I0127 13:20:30.738965 4786 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-console/console-5c7595d455-5t5mm" Jan 27 13:20:30 crc kubenswrapper[4786]: I0127 13:20:30.739441 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5c7595d455-5t5mm" Jan 27 13:20:30 crc kubenswrapper[4786]: I0127 13:20:30.743925 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5c7595d455-5t5mm" Jan 27 13:20:31 crc kubenswrapper[4786]: I0127 13:20:31.080289 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5c7595d455-5t5mm" Jan 27 13:20:31 crc kubenswrapper[4786]: I0127 13:20:31.124181 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-vwjp5"] Jan 27 13:20:40 crc kubenswrapper[4786]: I0127 13:20:40.969983 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nvbzs" Jan 27 13:20:54 crc kubenswrapper[4786]: I0127 13:20:54.224595 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc42nbb"] Jan 27 13:20:54 crc kubenswrapper[4786]: I0127 13:20:54.226314 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc42nbb" Jan 27 13:20:54 crc kubenswrapper[4786]: I0127 13:20:54.229621 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 13:20:54 crc kubenswrapper[4786]: I0127 13:20:54.234031 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc42nbb"] Jan 27 13:20:54 crc kubenswrapper[4786]: I0127 13:20:54.288274 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2cb03dff-5d00-4c73-a067-16b5513602a9-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc42nbb\" (UID: \"2cb03dff-5d00-4c73-a067-16b5513602a9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc42nbb" Jan 27 13:20:54 crc kubenswrapper[4786]: I0127 13:20:54.288341 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2cb03dff-5d00-4c73-a067-16b5513602a9-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc42nbb\" (UID: \"2cb03dff-5d00-4c73-a067-16b5513602a9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc42nbb" Jan 27 13:20:54 crc kubenswrapper[4786]: I0127 13:20:54.288401 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t7vx\" (UniqueName: \"kubernetes.io/projected/2cb03dff-5d00-4c73-a067-16b5513602a9-kube-api-access-2t7vx\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc42nbb\" (UID: \"2cb03dff-5d00-4c73-a067-16b5513602a9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc42nbb" Jan 27 13:20:54 crc kubenswrapper[4786]: 
I0127 13:20:54.389974 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2cb03dff-5d00-4c73-a067-16b5513602a9-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc42nbb\" (UID: \"2cb03dff-5d00-4c73-a067-16b5513602a9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc42nbb" Jan 27 13:20:54 crc kubenswrapper[4786]: I0127 13:20:54.390056 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2cb03dff-5d00-4c73-a067-16b5513602a9-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc42nbb\" (UID: \"2cb03dff-5d00-4c73-a067-16b5513602a9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc42nbb" Jan 27 13:20:54 crc kubenswrapper[4786]: I0127 13:20:54.390183 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t7vx\" (UniqueName: \"kubernetes.io/projected/2cb03dff-5d00-4c73-a067-16b5513602a9-kube-api-access-2t7vx\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc42nbb\" (UID: \"2cb03dff-5d00-4c73-a067-16b5513602a9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc42nbb" Jan 27 13:20:54 crc kubenswrapper[4786]: I0127 13:20:54.390591 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2cb03dff-5d00-4c73-a067-16b5513602a9-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc42nbb\" (UID: \"2cb03dff-5d00-4c73-a067-16b5513602a9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc42nbb" Jan 27 13:20:54 crc kubenswrapper[4786]: I0127 13:20:54.390852 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/2cb03dff-5d00-4c73-a067-16b5513602a9-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc42nbb\" (UID: \"2cb03dff-5d00-4c73-a067-16b5513602a9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc42nbb" Jan 27 13:20:54 crc kubenswrapper[4786]: I0127 13:20:54.410415 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t7vx\" (UniqueName: \"kubernetes.io/projected/2cb03dff-5d00-4c73-a067-16b5513602a9-kube-api-access-2t7vx\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc42nbb\" (UID: \"2cb03dff-5d00-4c73-a067-16b5513602a9\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc42nbb" Jan 27 13:20:54 crc kubenswrapper[4786]: I0127 13:20:54.547771 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc42nbb" Jan 27 13:20:54 crc kubenswrapper[4786]: I0127 13:20:54.933073 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc42nbb"] Jan 27 13:20:55 crc kubenswrapper[4786]: I0127 13:20:55.260374 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc42nbb" event={"ID":"2cb03dff-5d00-4c73-a067-16b5513602a9","Type":"ContainerStarted","Data":"b763d553ed3e3b8c9813e12fff96f774170922713c7ecd8e2acfbc74cdc15d12"} Jan 27 13:20:56 crc kubenswrapper[4786]: I0127 13:20:56.174509 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-vwjp5" podUID="47f5a0b2-7757-4795-901e-d175d64ebe67" containerName="console" containerID="cri-o://0fb53450c1bd061f1314b2f0eb86ed25eacddb0a222911ad2e8dc687a44bd735" gracePeriod=15 Jan 27 13:20:56 crc kubenswrapper[4786]: I0127 13:20:56.273340 4786 
generic.go:334] "Generic (PLEG): container finished" podID="2cb03dff-5d00-4c73-a067-16b5513602a9" containerID="987cc0acf8990833ec1714c6498d42292c40b4d1a09ea8efb24690ad472e9afa" exitCode=0 Jan 27 13:20:56 crc kubenswrapper[4786]: I0127 13:20:56.273395 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc42nbb" event={"ID":"2cb03dff-5d00-4c73-a067-16b5513602a9","Type":"ContainerDied","Data":"987cc0acf8990833ec1714c6498d42292c40b4d1a09ea8efb24690ad472e9afa"} Jan 27 13:20:57 crc kubenswrapper[4786]: I0127 13:20:57.162232 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-vwjp5_47f5a0b2-7757-4795-901e-d175d64ebe67/console/0.log" Jan 27 13:20:57 crc kubenswrapper[4786]: I0127 13:20:57.162571 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-vwjp5" Jan 27 13:20:57 crc kubenswrapper[4786]: I0127 13:20:57.279038 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-vwjp5_47f5a0b2-7757-4795-901e-d175d64ebe67/console/0.log" Jan 27 13:20:57 crc kubenswrapper[4786]: I0127 13:20:57.279088 4786 generic.go:334] "Generic (PLEG): container finished" podID="47f5a0b2-7757-4795-901e-d175d64ebe67" containerID="0fb53450c1bd061f1314b2f0eb86ed25eacddb0a222911ad2e8dc687a44bd735" exitCode=2 Jan 27 13:20:57 crc kubenswrapper[4786]: I0127 13:20:57.279117 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-vwjp5" event={"ID":"47f5a0b2-7757-4795-901e-d175d64ebe67","Type":"ContainerDied","Data":"0fb53450c1bd061f1314b2f0eb86ed25eacddb0a222911ad2e8dc687a44bd735"} Jan 27 13:20:57 crc kubenswrapper[4786]: I0127 13:20:57.279147 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-vwjp5" 
event={"ID":"47f5a0b2-7757-4795-901e-d175d64ebe67","Type":"ContainerDied","Data":"6987c095acb536d7d04d579ec708fd38ddcae5dda6b12d8794fdfe05a2c1c0c0"} Jan 27 13:20:57 crc kubenswrapper[4786]: I0127 13:20:57.279157 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-vwjp5" Jan 27 13:20:57 crc kubenswrapper[4786]: I0127 13:20:57.279164 4786 scope.go:117] "RemoveContainer" containerID="0fb53450c1bd061f1314b2f0eb86ed25eacddb0a222911ad2e8dc687a44bd735" Jan 27 13:20:57 crc kubenswrapper[4786]: I0127 13:20:57.300362 4786 scope.go:117] "RemoveContainer" containerID="0fb53450c1bd061f1314b2f0eb86ed25eacddb0a222911ad2e8dc687a44bd735" Jan 27 13:20:57 crc kubenswrapper[4786]: E0127 13:20:57.300900 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fb53450c1bd061f1314b2f0eb86ed25eacddb0a222911ad2e8dc687a44bd735\": container with ID starting with 0fb53450c1bd061f1314b2f0eb86ed25eacddb0a222911ad2e8dc687a44bd735 not found: ID does not exist" containerID="0fb53450c1bd061f1314b2f0eb86ed25eacddb0a222911ad2e8dc687a44bd735" Jan 27 13:20:57 crc kubenswrapper[4786]: I0127 13:20:57.300950 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fb53450c1bd061f1314b2f0eb86ed25eacddb0a222911ad2e8dc687a44bd735"} err="failed to get container status \"0fb53450c1bd061f1314b2f0eb86ed25eacddb0a222911ad2e8dc687a44bd735\": rpc error: code = NotFound desc = could not find container \"0fb53450c1bd061f1314b2f0eb86ed25eacddb0a222911ad2e8dc687a44bd735\": container with ID starting with 0fb53450c1bd061f1314b2f0eb86ed25eacddb0a222911ad2e8dc687a44bd735 not found: ID does not exist" Jan 27 13:20:57 crc kubenswrapper[4786]: I0127 13:20:57.322746 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjfrl\" (UniqueName: 
\"kubernetes.io/projected/47f5a0b2-7757-4795-901e-d175d64ebe67-kube-api-access-zjfrl\") pod \"47f5a0b2-7757-4795-901e-d175d64ebe67\" (UID: \"47f5a0b2-7757-4795-901e-d175d64ebe67\") " Jan 27 13:20:57 crc kubenswrapper[4786]: I0127 13:20:57.322798 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/47f5a0b2-7757-4795-901e-d175d64ebe67-console-oauth-config\") pod \"47f5a0b2-7757-4795-901e-d175d64ebe67\" (UID: \"47f5a0b2-7757-4795-901e-d175d64ebe67\") " Jan 27 13:20:57 crc kubenswrapper[4786]: I0127 13:20:57.322834 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/47f5a0b2-7757-4795-901e-d175d64ebe67-console-config\") pod \"47f5a0b2-7757-4795-901e-d175d64ebe67\" (UID: \"47f5a0b2-7757-4795-901e-d175d64ebe67\") " Jan 27 13:20:57 crc kubenswrapper[4786]: I0127 13:20:57.322909 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47f5a0b2-7757-4795-901e-d175d64ebe67-trusted-ca-bundle\") pod \"47f5a0b2-7757-4795-901e-d175d64ebe67\" (UID: \"47f5a0b2-7757-4795-901e-d175d64ebe67\") " Jan 27 13:20:57 crc kubenswrapper[4786]: I0127 13:20:57.322963 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/47f5a0b2-7757-4795-901e-d175d64ebe67-oauth-serving-cert\") pod \"47f5a0b2-7757-4795-901e-d175d64ebe67\" (UID: \"47f5a0b2-7757-4795-901e-d175d64ebe67\") " Jan 27 13:20:57 crc kubenswrapper[4786]: I0127 13:20:57.322998 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/47f5a0b2-7757-4795-901e-d175d64ebe67-console-serving-cert\") pod \"47f5a0b2-7757-4795-901e-d175d64ebe67\" (UID: \"47f5a0b2-7757-4795-901e-d175d64ebe67\") " Jan 27 
13:20:57 crc kubenswrapper[4786]: I0127 13:20:57.323042 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/47f5a0b2-7757-4795-901e-d175d64ebe67-service-ca\") pod \"47f5a0b2-7757-4795-901e-d175d64ebe67\" (UID: \"47f5a0b2-7757-4795-901e-d175d64ebe67\") " Jan 27 13:20:57 crc kubenswrapper[4786]: I0127 13:20:57.323748 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47f5a0b2-7757-4795-901e-d175d64ebe67-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "47f5a0b2-7757-4795-901e-d175d64ebe67" (UID: "47f5a0b2-7757-4795-901e-d175d64ebe67"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:20:57 crc kubenswrapper[4786]: I0127 13:20:57.324116 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47f5a0b2-7757-4795-901e-d175d64ebe67-console-config" (OuterVolumeSpecName: "console-config") pod "47f5a0b2-7757-4795-901e-d175d64ebe67" (UID: "47f5a0b2-7757-4795-901e-d175d64ebe67"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:20:57 crc kubenswrapper[4786]: I0127 13:20:57.323759 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47f5a0b2-7757-4795-901e-d175d64ebe67-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "47f5a0b2-7757-4795-901e-d175d64ebe67" (UID: "47f5a0b2-7757-4795-901e-d175d64ebe67"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:20:57 crc kubenswrapper[4786]: I0127 13:20:57.323804 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47f5a0b2-7757-4795-901e-d175d64ebe67-service-ca" (OuterVolumeSpecName: "service-ca") pod "47f5a0b2-7757-4795-901e-d175d64ebe67" (UID: "47f5a0b2-7757-4795-901e-d175d64ebe67"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:20:57 crc kubenswrapper[4786]: I0127 13:20:57.328799 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47f5a0b2-7757-4795-901e-d175d64ebe67-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "47f5a0b2-7757-4795-901e-d175d64ebe67" (UID: "47f5a0b2-7757-4795-901e-d175d64ebe67"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:20:57 crc kubenswrapper[4786]: I0127 13:20:57.328840 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47f5a0b2-7757-4795-901e-d175d64ebe67-kube-api-access-zjfrl" (OuterVolumeSpecName: "kube-api-access-zjfrl") pod "47f5a0b2-7757-4795-901e-d175d64ebe67" (UID: "47f5a0b2-7757-4795-901e-d175d64ebe67"). InnerVolumeSpecName "kube-api-access-zjfrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:20:57 crc kubenswrapper[4786]: I0127 13:20:57.328916 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47f5a0b2-7757-4795-901e-d175d64ebe67-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "47f5a0b2-7757-4795-901e-d175d64ebe67" (UID: "47f5a0b2-7757-4795-901e-d175d64ebe67"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:20:57 crc kubenswrapper[4786]: I0127 13:20:57.426126 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjfrl\" (UniqueName: \"kubernetes.io/projected/47f5a0b2-7757-4795-901e-d175d64ebe67-kube-api-access-zjfrl\") on node \"crc\" DevicePath \"\"" Jan 27 13:20:57 crc kubenswrapper[4786]: I0127 13:20:57.426178 4786 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/47f5a0b2-7757-4795-901e-d175d64ebe67-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:20:57 crc kubenswrapper[4786]: I0127 13:20:57.426190 4786 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/47f5a0b2-7757-4795-901e-d175d64ebe67-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 13:20:57 crc kubenswrapper[4786]: I0127 13:20:57.426202 4786 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47f5a0b2-7757-4795-901e-d175d64ebe67-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 13:20:57 crc kubenswrapper[4786]: I0127 13:20:57.426220 4786 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/47f5a0b2-7757-4795-901e-d175d64ebe67-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:20:57 crc kubenswrapper[4786]: I0127 13:20:57.426231 4786 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/47f5a0b2-7757-4795-901e-d175d64ebe67-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 13:20:57 crc kubenswrapper[4786]: I0127 13:20:57.426242 4786 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/47f5a0b2-7757-4795-901e-d175d64ebe67-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 13:20:57 crc 
kubenswrapper[4786]: I0127 13:20:57.598656 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-vwjp5"] Jan 27 13:20:57 crc kubenswrapper[4786]: I0127 13:20:57.601887 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-vwjp5"] Jan 27 13:20:59 crc kubenswrapper[4786]: I0127 13:20:59.473836 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47f5a0b2-7757-4795-901e-d175d64ebe67" path="/var/lib/kubelet/pods/47f5a0b2-7757-4795-901e-d175d64ebe67/volumes" Jan 27 13:21:02 crc kubenswrapper[4786]: I0127 13:21:02.330522 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc42nbb" event={"ID":"2cb03dff-5d00-4c73-a067-16b5513602a9","Type":"ContainerStarted","Data":"d783849f838beae27128254de990cce063fcd081077b9f36f5a73a9a58c1bc2d"} Jan 27 13:21:03 crc kubenswrapper[4786]: I0127 13:21:03.337720 4786 generic.go:334] "Generic (PLEG): container finished" podID="2cb03dff-5d00-4c73-a067-16b5513602a9" containerID="d783849f838beae27128254de990cce063fcd081077b9f36f5a73a9a58c1bc2d" exitCode=0 Jan 27 13:21:03 crc kubenswrapper[4786]: I0127 13:21:03.337773 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc42nbb" event={"ID":"2cb03dff-5d00-4c73-a067-16b5513602a9","Type":"ContainerDied","Data":"d783849f838beae27128254de990cce063fcd081077b9f36f5a73a9a58c1bc2d"} Jan 27 13:21:04 crc kubenswrapper[4786]: I0127 13:21:04.345743 4786 generic.go:334] "Generic (PLEG): container finished" podID="2cb03dff-5d00-4c73-a067-16b5513602a9" containerID="7fd8a2d26c2a27d30497484b086083bf573b626813454d77ea024b493124e763" exitCode=0 Jan 27 13:21:04 crc kubenswrapper[4786]: I0127 13:21:04.345844 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc42nbb" event={"ID":"2cb03dff-5d00-4c73-a067-16b5513602a9","Type":"ContainerDied","Data":"7fd8a2d26c2a27d30497484b086083bf573b626813454d77ea024b493124e763"} Jan 27 13:21:05 crc kubenswrapper[4786]: I0127 13:21:05.570578 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc42nbb" Jan 27 13:21:05 crc kubenswrapper[4786]: I0127 13:21:05.664485 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2cb03dff-5d00-4c73-a067-16b5513602a9-util\") pod \"2cb03dff-5d00-4c73-a067-16b5513602a9\" (UID: \"2cb03dff-5d00-4c73-a067-16b5513602a9\") " Jan 27 13:21:05 crc kubenswrapper[4786]: I0127 13:21:05.664552 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2cb03dff-5d00-4c73-a067-16b5513602a9-bundle\") pod \"2cb03dff-5d00-4c73-a067-16b5513602a9\" (UID: \"2cb03dff-5d00-4c73-a067-16b5513602a9\") " Jan 27 13:21:05 crc kubenswrapper[4786]: I0127 13:21:05.664594 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2t7vx\" (UniqueName: \"kubernetes.io/projected/2cb03dff-5d00-4c73-a067-16b5513602a9-kube-api-access-2t7vx\") pod \"2cb03dff-5d00-4c73-a067-16b5513602a9\" (UID: \"2cb03dff-5d00-4c73-a067-16b5513602a9\") " Jan 27 13:21:05 crc kubenswrapper[4786]: I0127 13:21:05.665511 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cb03dff-5d00-4c73-a067-16b5513602a9-bundle" (OuterVolumeSpecName: "bundle") pod "2cb03dff-5d00-4c73-a067-16b5513602a9" (UID: "2cb03dff-5d00-4c73-a067-16b5513602a9"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:21:05 crc kubenswrapper[4786]: I0127 13:21:05.670215 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cb03dff-5d00-4c73-a067-16b5513602a9-kube-api-access-2t7vx" (OuterVolumeSpecName: "kube-api-access-2t7vx") pod "2cb03dff-5d00-4c73-a067-16b5513602a9" (UID: "2cb03dff-5d00-4c73-a067-16b5513602a9"). InnerVolumeSpecName "kube-api-access-2t7vx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:21:05 crc kubenswrapper[4786]: I0127 13:21:05.674416 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cb03dff-5d00-4c73-a067-16b5513602a9-util" (OuterVolumeSpecName: "util") pod "2cb03dff-5d00-4c73-a067-16b5513602a9" (UID: "2cb03dff-5d00-4c73-a067-16b5513602a9"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:21:05 crc kubenswrapper[4786]: I0127 13:21:05.765979 4786 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2cb03dff-5d00-4c73-a067-16b5513602a9-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 13:21:05 crc kubenswrapper[4786]: I0127 13:21:05.766024 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2t7vx\" (UniqueName: \"kubernetes.io/projected/2cb03dff-5d00-4c73-a067-16b5513602a9-kube-api-access-2t7vx\") on node \"crc\" DevicePath \"\"" Jan 27 13:21:05 crc kubenswrapper[4786]: I0127 13:21:05.766035 4786 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2cb03dff-5d00-4c73-a067-16b5513602a9-util\") on node \"crc\" DevicePath \"\"" Jan 27 13:21:06 crc kubenswrapper[4786]: I0127 13:21:06.359160 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc42nbb" 
event={"ID":"2cb03dff-5d00-4c73-a067-16b5513602a9","Type":"ContainerDied","Data":"b763d553ed3e3b8c9813e12fff96f774170922713c7ecd8e2acfbc74cdc15d12"} Jan 27 13:21:06 crc kubenswrapper[4786]: I0127 13:21:06.359206 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b763d553ed3e3b8c9813e12fff96f774170922713c7ecd8e2acfbc74cdc15d12" Jan 27 13:21:06 crc kubenswrapper[4786]: I0127 13:21:06.359214 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc42nbb" Jan 27 13:21:17 crc kubenswrapper[4786]: I0127 13:21:17.352158 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-544859b7c7-wsbph"] Jan 27 13:21:17 crc kubenswrapper[4786]: E0127 13:21:17.353085 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cb03dff-5d00-4c73-a067-16b5513602a9" containerName="util" Jan 27 13:21:17 crc kubenswrapper[4786]: I0127 13:21:17.353102 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cb03dff-5d00-4c73-a067-16b5513602a9" containerName="util" Jan 27 13:21:17 crc kubenswrapper[4786]: E0127 13:21:17.353114 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cb03dff-5d00-4c73-a067-16b5513602a9" containerName="pull" Jan 27 13:21:17 crc kubenswrapper[4786]: I0127 13:21:17.353122 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cb03dff-5d00-4c73-a067-16b5513602a9" containerName="pull" Jan 27 13:21:17 crc kubenswrapper[4786]: E0127 13:21:17.353134 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cb03dff-5d00-4c73-a067-16b5513602a9" containerName="extract" Jan 27 13:21:17 crc kubenswrapper[4786]: I0127 13:21:17.353141 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cb03dff-5d00-4c73-a067-16b5513602a9" containerName="extract" Jan 27 13:21:17 crc kubenswrapper[4786]: E0127 13:21:17.353155 4786 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47f5a0b2-7757-4795-901e-d175d64ebe67" containerName="console" Jan 27 13:21:17 crc kubenswrapper[4786]: I0127 13:21:17.353162 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="47f5a0b2-7757-4795-901e-d175d64ebe67" containerName="console" Jan 27 13:21:17 crc kubenswrapper[4786]: I0127 13:21:17.353287 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="47f5a0b2-7757-4795-901e-d175d64ebe67" containerName="console" Jan 27 13:21:17 crc kubenswrapper[4786]: I0127 13:21:17.353301 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cb03dff-5d00-4c73-a067-16b5513602a9" containerName="extract" Jan 27 13:21:17 crc kubenswrapper[4786]: I0127 13:21:17.353788 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-544859b7c7-wsbph" Jan 27 13:21:17 crc kubenswrapper[4786]: I0127 13:21:17.357098 4786 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 27 13:21:17 crc kubenswrapper[4786]: I0127 13:21:17.357409 4786 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 27 13:21:17 crc kubenswrapper[4786]: I0127 13:21:17.357642 4786 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-tvllk" Jan 27 13:21:17 crc kubenswrapper[4786]: I0127 13:21:17.358576 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 27 13:21:17 crc kubenswrapper[4786]: I0127 13:21:17.358800 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 27 13:21:17 crc kubenswrapper[4786]: I0127 13:21:17.376352 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["metallb-system/metallb-operator-controller-manager-544859b7c7-wsbph"] Jan 27 13:21:17 crc kubenswrapper[4786]: I0127 13:21:17.501067 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0f481967-0c63-4c37-9daf-5da5dd5508fd-apiservice-cert\") pod \"metallb-operator-controller-manager-544859b7c7-wsbph\" (UID: \"0f481967-0c63-4c37-9daf-5da5dd5508fd\") " pod="metallb-system/metallb-operator-controller-manager-544859b7c7-wsbph" Jan 27 13:21:17 crc kubenswrapper[4786]: I0127 13:21:17.501395 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h745\" (UniqueName: \"kubernetes.io/projected/0f481967-0c63-4c37-9daf-5da5dd5508fd-kube-api-access-5h745\") pod \"metallb-operator-controller-manager-544859b7c7-wsbph\" (UID: \"0f481967-0c63-4c37-9daf-5da5dd5508fd\") " pod="metallb-system/metallb-operator-controller-manager-544859b7c7-wsbph" Jan 27 13:21:17 crc kubenswrapper[4786]: I0127 13:21:17.501419 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0f481967-0c63-4c37-9daf-5da5dd5508fd-webhook-cert\") pod \"metallb-operator-controller-manager-544859b7c7-wsbph\" (UID: \"0f481967-0c63-4c37-9daf-5da5dd5508fd\") " pod="metallb-system/metallb-operator-controller-manager-544859b7c7-wsbph" Jan 27 13:21:17 crc kubenswrapper[4786]: I0127 13:21:17.599784 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-79f6764f84-kjh6v"] Jan 27 13:21:17 crc kubenswrapper[4786]: I0127 13:21:17.600438 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-79f6764f84-kjh6v" Jan 27 13:21:17 crc kubenswrapper[4786]: I0127 13:21:17.602035 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h745\" (UniqueName: \"kubernetes.io/projected/0f481967-0c63-4c37-9daf-5da5dd5508fd-kube-api-access-5h745\") pod \"metallb-operator-controller-manager-544859b7c7-wsbph\" (UID: \"0f481967-0c63-4c37-9daf-5da5dd5508fd\") " pod="metallb-system/metallb-operator-controller-manager-544859b7c7-wsbph" Jan 27 13:21:17 crc kubenswrapper[4786]: I0127 13:21:17.602078 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0f481967-0c63-4c37-9daf-5da5dd5508fd-webhook-cert\") pod \"metallb-operator-controller-manager-544859b7c7-wsbph\" (UID: \"0f481967-0c63-4c37-9daf-5da5dd5508fd\") " pod="metallb-system/metallb-operator-controller-manager-544859b7c7-wsbph" Jan 27 13:21:17 crc kubenswrapper[4786]: I0127 13:21:17.602147 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0f481967-0c63-4c37-9daf-5da5dd5508fd-apiservice-cert\") pod \"metallb-operator-controller-manager-544859b7c7-wsbph\" (UID: \"0f481967-0c63-4c37-9daf-5da5dd5508fd\") " pod="metallb-system/metallb-operator-controller-manager-544859b7c7-wsbph" Jan 27 13:21:17 crc kubenswrapper[4786]: I0127 13:21:17.602660 4786 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-lzp6h" Jan 27 13:21:17 crc kubenswrapper[4786]: I0127 13:21:17.602905 4786 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 27 13:21:17 crc kubenswrapper[4786]: I0127 13:21:17.603085 4786 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 27 13:21:17 crc 
kubenswrapper[4786]: I0127 13:21:17.609742 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0f481967-0c63-4c37-9daf-5da5dd5508fd-webhook-cert\") pod \"metallb-operator-controller-manager-544859b7c7-wsbph\" (UID: \"0f481967-0c63-4c37-9daf-5da5dd5508fd\") " pod="metallb-system/metallb-operator-controller-manager-544859b7c7-wsbph" Jan 27 13:21:17 crc kubenswrapper[4786]: I0127 13:21:17.612386 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0f481967-0c63-4c37-9daf-5da5dd5508fd-apiservice-cert\") pod \"metallb-operator-controller-manager-544859b7c7-wsbph\" (UID: \"0f481967-0c63-4c37-9daf-5da5dd5508fd\") " pod="metallb-system/metallb-operator-controller-manager-544859b7c7-wsbph" Jan 27 13:21:17 crc kubenswrapper[4786]: I0127 13:21:17.619141 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-79f6764f84-kjh6v"] Jan 27 13:21:17 crc kubenswrapper[4786]: I0127 13:21:17.631118 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h745\" (UniqueName: \"kubernetes.io/projected/0f481967-0c63-4c37-9daf-5da5dd5508fd-kube-api-access-5h745\") pod \"metallb-operator-controller-manager-544859b7c7-wsbph\" (UID: \"0f481967-0c63-4c37-9daf-5da5dd5508fd\") " pod="metallb-system/metallb-operator-controller-manager-544859b7c7-wsbph" Jan 27 13:21:17 crc kubenswrapper[4786]: I0127 13:21:17.704506 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/84b09bb0-a364-469e-9302-c3582b359791-apiservice-cert\") pod \"metallb-operator-webhook-server-79f6764f84-kjh6v\" (UID: \"84b09bb0-a364-469e-9302-c3582b359791\") " pod="metallb-system/metallb-operator-webhook-server-79f6764f84-kjh6v" Jan 27 13:21:17 crc kubenswrapper[4786]: I0127 13:21:17.704552 4786 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g44fd\" (UniqueName: \"kubernetes.io/projected/84b09bb0-a364-469e-9302-c3582b359791-kube-api-access-g44fd\") pod \"metallb-operator-webhook-server-79f6764f84-kjh6v\" (UID: \"84b09bb0-a364-469e-9302-c3582b359791\") " pod="metallb-system/metallb-operator-webhook-server-79f6764f84-kjh6v" Jan 27 13:21:17 crc kubenswrapper[4786]: I0127 13:21:17.704639 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/84b09bb0-a364-469e-9302-c3582b359791-webhook-cert\") pod \"metallb-operator-webhook-server-79f6764f84-kjh6v\" (UID: \"84b09bb0-a364-469e-9302-c3582b359791\") " pod="metallb-system/metallb-operator-webhook-server-79f6764f84-kjh6v" Jan 27 13:21:17 crc kubenswrapper[4786]: I0127 13:21:17.718168 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-544859b7c7-wsbph" Jan 27 13:21:17 crc kubenswrapper[4786]: I0127 13:21:17.805288 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/84b09bb0-a364-469e-9302-c3582b359791-apiservice-cert\") pod \"metallb-operator-webhook-server-79f6764f84-kjh6v\" (UID: \"84b09bb0-a364-469e-9302-c3582b359791\") " pod="metallb-system/metallb-operator-webhook-server-79f6764f84-kjh6v" Jan 27 13:21:17 crc kubenswrapper[4786]: I0127 13:21:17.805321 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g44fd\" (UniqueName: \"kubernetes.io/projected/84b09bb0-a364-469e-9302-c3582b359791-kube-api-access-g44fd\") pod \"metallb-operator-webhook-server-79f6764f84-kjh6v\" (UID: \"84b09bb0-a364-469e-9302-c3582b359791\") " pod="metallb-system/metallb-operator-webhook-server-79f6764f84-kjh6v" Jan 27 13:21:17 crc kubenswrapper[4786]: I0127 
13:21:17.805359 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/84b09bb0-a364-469e-9302-c3582b359791-webhook-cert\") pod \"metallb-operator-webhook-server-79f6764f84-kjh6v\" (UID: \"84b09bb0-a364-469e-9302-c3582b359791\") " pod="metallb-system/metallb-operator-webhook-server-79f6764f84-kjh6v" Jan 27 13:21:17 crc kubenswrapper[4786]: I0127 13:21:17.812116 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/84b09bb0-a364-469e-9302-c3582b359791-apiservice-cert\") pod \"metallb-operator-webhook-server-79f6764f84-kjh6v\" (UID: \"84b09bb0-a364-469e-9302-c3582b359791\") " pod="metallb-system/metallb-operator-webhook-server-79f6764f84-kjh6v" Jan 27 13:21:17 crc kubenswrapper[4786]: I0127 13:21:17.827990 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g44fd\" (UniqueName: \"kubernetes.io/projected/84b09bb0-a364-469e-9302-c3582b359791-kube-api-access-g44fd\") pod \"metallb-operator-webhook-server-79f6764f84-kjh6v\" (UID: \"84b09bb0-a364-469e-9302-c3582b359791\") " pod="metallb-system/metallb-operator-webhook-server-79f6764f84-kjh6v" Jan 27 13:21:17 crc kubenswrapper[4786]: I0127 13:21:17.831182 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/84b09bb0-a364-469e-9302-c3582b359791-webhook-cert\") pod \"metallb-operator-webhook-server-79f6764f84-kjh6v\" (UID: \"84b09bb0-a364-469e-9302-c3582b359791\") " pod="metallb-system/metallb-operator-webhook-server-79f6764f84-kjh6v" Jan 27 13:21:17 crc kubenswrapper[4786]: I0127 13:21:17.944741 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-544859b7c7-wsbph"] Jan 27 13:21:17 crc kubenswrapper[4786]: W0127 13:21:17.953044 4786 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f481967_0c63_4c37_9daf_5da5dd5508fd.slice/crio-16f35e4d1b2a451451c97886d5a29daccac2f4c38ed1a011a9d8fefea135f437 WatchSource:0}: Error finding container 16f35e4d1b2a451451c97886d5a29daccac2f4c38ed1a011a9d8fefea135f437: Status 404 returned error can't find the container with id 16f35e4d1b2a451451c97886d5a29daccac2f4c38ed1a011a9d8fefea135f437 Jan 27 13:21:17 crc kubenswrapper[4786]: I0127 13:21:17.978539 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-79f6764f84-kjh6v" Jan 27 13:21:18 crc kubenswrapper[4786]: I0127 13:21:18.239725 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-79f6764f84-kjh6v"] Jan 27 13:21:18 crc kubenswrapper[4786]: W0127 13:21:18.248274 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84b09bb0_a364_469e_9302_c3582b359791.slice/crio-264859eed7e584d6e130c3e462f7bc57a712095c14ed1ab1f6f266d254bb8725 WatchSource:0}: Error finding container 264859eed7e584d6e130c3e462f7bc57a712095c14ed1ab1f6f266d254bb8725: Status 404 returned error can't find the container with id 264859eed7e584d6e130c3e462f7bc57a712095c14ed1ab1f6f266d254bb8725 Jan 27 13:21:18 crc kubenswrapper[4786]: I0127 13:21:18.426045 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-544859b7c7-wsbph" event={"ID":"0f481967-0c63-4c37-9daf-5da5dd5508fd","Type":"ContainerStarted","Data":"16f35e4d1b2a451451c97886d5a29daccac2f4c38ed1a011a9d8fefea135f437"} Jan 27 13:21:18 crc kubenswrapper[4786]: I0127 13:21:18.427235 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-79f6764f84-kjh6v" 
event={"ID":"84b09bb0-a364-469e-9302-c3582b359791","Type":"ContainerStarted","Data":"264859eed7e584d6e130c3e462f7bc57a712095c14ed1ab1f6f266d254bb8725"} Jan 27 13:21:25 crc kubenswrapper[4786]: I0127 13:21:25.476167 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-544859b7c7-wsbph" event={"ID":"0f481967-0c63-4c37-9daf-5da5dd5508fd","Type":"ContainerStarted","Data":"93a3456cd5eda5bd0fbed1d092ba2f9a5b2bbd8ab1b15a2a67c08d4b1953f0fb"} Jan 27 13:21:26 crc kubenswrapper[4786]: I0127 13:21:26.483096 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-79f6764f84-kjh6v" event={"ID":"84b09bb0-a364-469e-9302-c3582b359791","Type":"ContainerStarted","Data":"8cac3825859416e56ffb7a4c49c7f767a02004651e6755c64c7a3f17e5ddb3d5"} Jan 27 13:21:26 crc kubenswrapper[4786]: I0127 13:21:26.483533 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-544859b7c7-wsbph" Jan 27 13:21:26 crc kubenswrapper[4786]: I0127 13:21:26.506683 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-544859b7c7-wsbph" podStartSLOduration=2.329943221 podStartE2EDuration="9.506661139s" podCreationTimestamp="2026-01-27 13:21:17 +0000 UTC" firstStartedPulling="2026-01-27 13:21:17.958586183 +0000 UTC m=+861.169200302" lastFinishedPulling="2026-01-27 13:21:25.135304101 +0000 UTC m=+868.345918220" observedRunningTime="2026-01-27 13:21:26.5019695 +0000 UTC m=+869.712583619" watchObservedRunningTime="2026-01-27 13:21:26.506661139 +0000 UTC m=+869.717275268" Jan 27 13:21:26 crc kubenswrapper[4786]: I0127 13:21:26.522792 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-79f6764f84-kjh6v" podStartSLOduration=2.625961638 podStartE2EDuration="9.522762029s" podCreationTimestamp="2026-01-27 
13:21:17 +0000 UTC" firstStartedPulling="2026-01-27 13:21:18.255416533 +0000 UTC m=+861.466030652" lastFinishedPulling="2026-01-27 13:21:25.152216924 +0000 UTC m=+868.362831043" observedRunningTime="2026-01-27 13:21:26.521951236 +0000 UTC m=+869.732565355" watchObservedRunningTime="2026-01-27 13:21:26.522762029 +0000 UTC m=+869.733376148" Jan 27 13:21:27 crc kubenswrapper[4786]: I0127 13:21:27.489558 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-79f6764f84-kjh6v" Jan 27 13:21:37 crc kubenswrapper[4786]: I0127 13:21:37.983489 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-79f6764f84-kjh6v" Jan 27 13:21:57 crc kubenswrapper[4786]: I0127 13:21:57.721465 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-544859b7c7-wsbph" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.394471 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-f8lw7"] Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.395118 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-f8lw7" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.397555 4786 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.397839 4786 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-2vr46" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.411493 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-f8lw7"] Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.429698 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-hwqdq"] Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.435031 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-hwqdq" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.446570 4786 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.446899 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.499083 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-92hjt"] Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.500217 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-92hjt" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.501881 4786 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-jpssv" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.503094 4786 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.503109 4786 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.503270 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.511395 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-qsx4x"] Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.512240 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-qsx4x" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.515652 4786 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.535944 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfglv\" (UniqueName: \"kubernetes.io/projected/0ffbeb23-d6ac-4a28-81a5-052f7c2c8618-kube-api-access-mfglv\") pod \"frr-k8s-webhook-server-7df86c4f6c-f8lw7\" (UID: \"0ffbeb23-d6ac-4a28-81a5-052f7c2c8618\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-f8lw7" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.535993 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0a3d249e-e994-4e5d-9970-04c4977f28c9-frr-sockets\") pod \"frr-k8s-hwqdq\" (UID: \"0a3d249e-e994-4e5d-9970-04c4977f28c9\") " pod="metallb-system/frr-k8s-hwqdq" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.536039 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0a3d249e-e994-4e5d-9970-04c4977f28c9-metrics\") pod \"frr-k8s-hwqdq\" (UID: \"0a3d249e-e994-4e5d-9970-04c4977f28c9\") " pod="metallb-system/frr-k8s-hwqdq" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.536076 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0a3d249e-e994-4e5d-9970-04c4977f28c9-frr-startup\") pod \"frr-k8s-hwqdq\" (UID: \"0a3d249e-e994-4e5d-9970-04c4977f28c9\") " pod="metallb-system/frr-k8s-hwqdq" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.536120 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" 
(UniqueName: \"kubernetes.io/empty-dir/0a3d249e-e994-4e5d-9970-04c4977f28c9-reloader\") pod \"frr-k8s-hwqdq\" (UID: \"0a3d249e-e994-4e5d-9970-04c4977f28c9\") " pod="metallb-system/frr-k8s-hwqdq" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.536158 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0ffbeb23-d6ac-4a28-81a5-052f7c2c8618-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-f8lw7\" (UID: \"0ffbeb23-d6ac-4a28-81a5-052f7c2c8618\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-f8lw7" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.536210 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0a3d249e-e994-4e5d-9970-04c4977f28c9-frr-conf\") pod \"frr-k8s-hwqdq\" (UID: \"0a3d249e-e994-4e5d-9970-04c4977f28c9\") " pod="metallb-system/frr-k8s-hwqdq" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.536237 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a3d249e-e994-4e5d-9970-04c4977f28c9-metrics-certs\") pod \"frr-k8s-hwqdq\" (UID: \"0a3d249e-e994-4e5d-9970-04c4977f28c9\") " pod="metallb-system/frr-k8s-hwqdq" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.536288 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t98n5\" (UniqueName: \"kubernetes.io/projected/0a3d249e-e994-4e5d-9970-04c4977f28c9-kube-api-access-t98n5\") pod \"frr-k8s-hwqdq\" (UID: \"0a3d249e-e994-4e5d-9970-04c4977f28c9\") " pod="metallb-system/frr-k8s-hwqdq" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.538524 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-qsx4x"] Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.637871 4786 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp9jx\" (UniqueName: \"kubernetes.io/projected/fb3825b5-f83e-4064-ac93-19ee9e441b42-kube-api-access-qp9jx\") pod \"controller-6968d8fdc4-qsx4x\" (UID: \"fb3825b5-f83e-4064-ac93-19ee9e441b42\") " pod="metallb-system/controller-6968d8fdc4-qsx4x" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.637919 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfglv\" (UniqueName: \"kubernetes.io/projected/0ffbeb23-d6ac-4a28-81a5-052f7c2c8618-kube-api-access-mfglv\") pod \"frr-k8s-webhook-server-7df86c4f6c-f8lw7\" (UID: \"0ffbeb23-d6ac-4a28-81a5-052f7c2c8618\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-f8lw7" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.637946 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0a3d249e-e994-4e5d-9970-04c4977f28c9-frr-sockets\") pod \"frr-k8s-hwqdq\" (UID: \"0a3d249e-e994-4e5d-9970-04c4977f28c9\") " pod="metallb-system/frr-k8s-hwqdq" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.637966 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/987590ac-66b7-4ab0-8c6b-b72bbd04bab2-metallb-excludel2\") pod \"speaker-92hjt\" (UID: \"987590ac-66b7-4ab0-8c6b-b72bbd04bab2\") " pod="metallb-system/speaker-92hjt" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.637983 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0a3d249e-e994-4e5d-9970-04c4977f28c9-metrics\") pod \"frr-k8s-hwqdq\" (UID: \"0a3d249e-e994-4e5d-9970-04c4977f28c9\") " pod="metallb-system/frr-k8s-hwqdq" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.637999 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whpbm\" (UniqueName: \"kubernetes.io/projected/987590ac-66b7-4ab0-8c6b-b72bbd04bab2-kube-api-access-whpbm\") pod \"speaker-92hjt\" (UID: \"987590ac-66b7-4ab0-8c6b-b72bbd04bab2\") " pod="metallb-system/speaker-92hjt" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.638025 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/987590ac-66b7-4ab0-8c6b-b72bbd04bab2-memberlist\") pod \"speaker-92hjt\" (UID: \"987590ac-66b7-4ab0-8c6b-b72bbd04bab2\") " pod="metallb-system/speaker-92hjt" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.638065 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fb3825b5-f83e-4064-ac93-19ee9e441b42-cert\") pod \"controller-6968d8fdc4-qsx4x\" (UID: \"fb3825b5-f83e-4064-ac93-19ee9e441b42\") " pod="metallb-system/controller-6968d8fdc4-qsx4x" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.638081 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0a3d249e-e994-4e5d-9970-04c4977f28c9-frr-startup\") pod \"frr-k8s-hwqdq\" (UID: \"0a3d249e-e994-4e5d-9970-04c4977f28c9\") " pod="metallb-system/frr-k8s-hwqdq" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.638098 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0a3d249e-e994-4e5d-9970-04c4977f28c9-reloader\") pod \"frr-k8s-hwqdq\" (UID: \"0a3d249e-e994-4e5d-9970-04c4977f28c9\") " pod="metallb-system/frr-k8s-hwqdq" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.638118 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/987590ac-66b7-4ab0-8c6b-b72bbd04bab2-metrics-certs\") pod \"speaker-92hjt\" (UID: \"987590ac-66b7-4ab0-8c6b-b72bbd04bab2\") " pod="metallb-system/speaker-92hjt" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.638178 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0ffbeb23-d6ac-4a28-81a5-052f7c2c8618-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-f8lw7\" (UID: \"0ffbeb23-d6ac-4a28-81a5-052f7c2c8618\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-f8lw7" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.638198 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0a3d249e-e994-4e5d-9970-04c4977f28c9-frr-conf\") pod \"frr-k8s-hwqdq\" (UID: \"0a3d249e-e994-4e5d-9970-04c4977f28c9\") " pod="metallb-system/frr-k8s-hwqdq" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.638219 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fb3825b5-f83e-4064-ac93-19ee9e441b42-metrics-certs\") pod \"controller-6968d8fdc4-qsx4x\" (UID: \"fb3825b5-f83e-4064-ac93-19ee9e441b42\") " pod="metallb-system/controller-6968d8fdc4-qsx4x" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.638235 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a3d249e-e994-4e5d-9970-04c4977f28c9-metrics-certs\") pod \"frr-k8s-hwqdq\" (UID: \"0a3d249e-e994-4e5d-9970-04c4977f28c9\") " pod="metallb-system/frr-k8s-hwqdq" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.638258 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t98n5\" (UniqueName: \"kubernetes.io/projected/0a3d249e-e994-4e5d-9970-04c4977f28c9-kube-api-access-t98n5\") pod \"frr-k8s-hwqdq\" (UID: 
\"0a3d249e-e994-4e5d-9970-04c4977f28c9\") " pod="metallb-system/frr-k8s-hwqdq" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.639032 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0a3d249e-e994-4e5d-9970-04c4977f28c9-frr-sockets\") pod \"frr-k8s-hwqdq\" (UID: \"0a3d249e-e994-4e5d-9970-04c4977f28c9\") " pod="metallb-system/frr-k8s-hwqdq" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.639075 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0a3d249e-e994-4e5d-9970-04c4977f28c9-reloader\") pod \"frr-k8s-hwqdq\" (UID: \"0a3d249e-e994-4e5d-9970-04c4977f28c9\") " pod="metallb-system/frr-k8s-hwqdq" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.639845 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0a3d249e-e994-4e5d-9970-04c4977f28c9-frr-startup\") pod \"frr-k8s-hwqdq\" (UID: \"0a3d249e-e994-4e5d-9970-04c4977f28c9\") " pod="metallb-system/frr-k8s-hwqdq" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.639919 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0a3d249e-e994-4e5d-9970-04c4977f28c9-frr-conf\") pod \"frr-k8s-hwqdq\" (UID: \"0a3d249e-e994-4e5d-9970-04c4977f28c9\") " pod="metallb-system/frr-k8s-hwqdq" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.640108 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0a3d249e-e994-4e5d-9970-04c4977f28c9-metrics\") pod \"frr-k8s-hwqdq\" (UID: \"0a3d249e-e994-4e5d-9970-04c4977f28c9\") " pod="metallb-system/frr-k8s-hwqdq" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.647254 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/0a3d249e-e994-4e5d-9970-04c4977f28c9-metrics-certs\") pod \"frr-k8s-hwqdq\" (UID: \"0a3d249e-e994-4e5d-9970-04c4977f28c9\") " pod="metallb-system/frr-k8s-hwqdq" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.654189 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0ffbeb23-d6ac-4a28-81a5-052f7c2c8618-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-f8lw7\" (UID: \"0ffbeb23-d6ac-4a28-81a5-052f7c2c8618\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-f8lw7" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.659396 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t98n5\" (UniqueName: \"kubernetes.io/projected/0a3d249e-e994-4e5d-9970-04c4977f28c9-kube-api-access-t98n5\") pod \"frr-k8s-hwqdq\" (UID: \"0a3d249e-e994-4e5d-9970-04c4977f28c9\") " pod="metallb-system/frr-k8s-hwqdq" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.674685 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfglv\" (UniqueName: \"kubernetes.io/projected/0ffbeb23-d6ac-4a28-81a5-052f7c2c8618-kube-api-access-mfglv\") pod \"frr-k8s-webhook-server-7df86c4f6c-f8lw7\" (UID: \"0ffbeb23-d6ac-4a28-81a5-052f7c2c8618\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-f8lw7" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.729853 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-f8lw7" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.738953 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fb3825b5-f83e-4064-ac93-19ee9e441b42-metrics-certs\") pod \"controller-6968d8fdc4-qsx4x\" (UID: \"fb3825b5-f83e-4064-ac93-19ee9e441b42\") " pod="metallb-system/controller-6968d8fdc4-qsx4x" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.739035 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp9jx\" (UniqueName: \"kubernetes.io/projected/fb3825b5-f83e-4064-ac93-19ee9e441b42-kube-api-access-qp9jx\") pod \"controller-6968d8fdc4-qsx4x\" (UID: \"fb3825b5-f83e-4064-ac93-19ee9e441b42\") " pod="metallb-system/controller-6968d8fdc4-qsx4x" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.739072 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/987590ac-66b7-4ab0-8c6b-b72bbd04bab2-metallb-excludel2\") pod \"speaker-92hjt\" (UID: \"987590ac-66b7-4ab0-8c6b-b72bbd04bab2\") " pod="metallb-system/speaker-92hjt" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.739094 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whpbm\" (UniqueName: \"kubernetes.io/projected/987590ac-66b7-4ab0-8c6b-b72bbd04bab2-kube-api-access-whpbm\") pod \"speaker-92hjt\" (UID: \"987590ac-66b7-4ab0-8c6b-b72bbd04bab2\") " pod="metallb-system/speaker-92hjt" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.739125 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/987590ac-66b7-4ab0-8c6b-b72bbd04bab2-memberlist\") pod \"speaker-92hjt\" (UID: \"987590ac-66b7-4ab0-8c6b-b72bbd04bab2\") " pod="metallb-system/speaker-92hjt" Jan 27 13:21:58 crc 
kubenswrapper[4786]: I0127 13:21:58.739148 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fb3825b5-f83e-4064-ac93-19ee9e441b42-cert\") pod \"controller-6968d8fdc4-qsx4x\" (UID: \"fb3825b5-f83e-4064-ac93-19ee9e441b42\") " pod="metallb-system/controller-6968d8fdc4-qsx4x" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.739176 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/987590ac-66b7-4ab0-8c6b-b72bbd04bab2-metrics-certs\") pod \"speaker-92hjt\" (UID: \"987590ac-66b7-4ab0-8c6b-b72bbd04bab2\") " pod="metallb-system/speaker-92hjt" Jan 27 13:21:58 crc kubenswrapper[4786]: E0127 13:21:58.739311 4786 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 27 13:21:58 crc kubenswrapper[4786]: E0127 13:21:58.739371 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/987590ac-66b7-4ab0-8c6b-b72bbd04bab2-metrics-certs podName:987590ac-66b7-4ab0-8c6b-b72bbd04bab2 nodeName:}" failed. No retries permitted until 2026-01-27 13:21:59.239350042 +0000 UTC m=+902.449964161 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/987590ac-66b7-4ab0-8c6b-b72bbd04bab2-metrics-certs") pod "speaker-92hjt" (UID: "987590ac-66b7-4ab0-8c6b-b72bbd04bab2") : secret "speaker-certs-secret" not found Jan 27 13:21:58 crc kubenswrapper[4786]: E0127 13:21:58.739802 4786 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 27 13:21:58 crc kubenswrapper[4786]: E0127 13:21:58.739832 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/987590ac-66b7-4ab0-8c6b-b72bbd04bab2-memberlist podName:987590ac-66b7-4ab0-8c6b-b72bbd04bab2 nodeName:}" failed. 
No retries permitted until 2026-01-27 13:21:59.239824725 +0000 UTC m=+902.450438844 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/987590ac-66b7-4ab0-8c6b-b72bbd04bab2-memberlist") pod "speaker-92hjt" (UID: "987590ac-66b7-4ab0-8c6b-b72bbd04bab2") : secret "metallb-memberlist" not found Jan 27 13:21:58 crc kubenswrapper[4786]: E0127 13:21:58.739950 4786 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 27 13:21:58 crc kubenswrapper[4786]: E0127 13:21:58.739987 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fb3825b5-f83e-4064-ac93-19ee9e441b42-metrics-certs podName:fb3825b5-f83e-4064-ac93-19ee9e441b42 nodeName:}" failed. No retries permitted until 2026-01-27 13:21:59.239977739 +0000 UTC m=+902.450591858 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fb3825b5-f83e-4064-ac93-19ee9e441b42-metrics-certs") pod "controller-6968d8fdc4-qsx4x" (UID: "fb3825b5-f83e-4064-ac93-19ee9e441b42") : secret "controller-certs-secret" not found Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.740050 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/987590ac-66b7-4ab0-8c6b-b72bbd04bab2-metallb-excludel2\") pod \"speaker-92hjt\" (UID: \"987590ac-66b7-4ab0-8c6b-b72bbd04bab2\") " pod="metallb-system/speaker-92hjt" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.742211 4786 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.753897 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fb3825b5-f83e-4064-ac93-19ee9e441b42-cert\") pod \"controller-6968d8fdc4-qsx4x\" (UID: 
\"fb3825b5-f83e-4064-ac93-19ee9e441b42\") " pod="metallb-system/controller-6968d8fdc4-qsx4x" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.762499 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp9jx\" (UniqueName: \"kubernetes.io/projected/fb3825b5-f83e-4064-ac93-19ee9e441b42-kube-api-access-qp9jx\") pod \"controller-6968d8fdc4-qsx4x\" (UID: \"fb3825b5-f83e-4064-ac93-19ee9e441b42\") " pod="metallb-system/controller-6968d8fdc4-qsx4x" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.763042 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whpbm\" (UniqueName: \"kubernetes.io/projected/987590ac-66b7-4ab0-8c6b-b72bbd04bab2-kube-api-access-whpbm\") pod \"speaker-92hjt\" (UID: \"987590ac-66b7-4ab0-8c6b-b72bbd04bab2\") " pod="metallb-system/speaker-92hjt" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.765128 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-hwqdq" Jan 27 13:21:58 crc kubenswrapper[4786]: I0127 13:21:58.958230 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-f8lw7"] Jan 27 13:21:58 crc kubenswrapper[4786]: W0127 13:21:58.967573 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ffbeb23_d6ac_4a28_81a5_052f7c2c8618.slice/crio-c26cca417081396a35e6ee42421fd58df36e8cdc331064ae0e062c2518563e2e WatchSource:0}: Error finding container c26cca417081396a35e6ee42421fd58df36e8cdc331064ae0e062c2518563e2e: Status 404 returned error can't find the container with id c26cca417081396a35e6ee42421fd58df36e8cdc331064ae0e062c2518563e2e Jan 27 13:21:59 crc kubenswrapper[4786]: I0127 13:21:59.247690 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: 
\"kubernetes.io/secret/987590ac-66b7-4ab0-8c6b-b72bbd04bab2-memberlist\") pod \"speaker-92hjt\" (UID: \"987590ac-66b7-4ab0-8c6b-b72bbd04bab2\") " pod="metallb-system/speaker-92hjt" Jan 27 13:21:59 crc kubenswrapper[4786]: I0127 13:21:59.247769 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/987590ac-66b7-4ab0-8c6b-b72bbd04bab2-metrics-certs\") pod \"speaker-92hjt\" (UID: \"987590ac-66b7-4ab0-8c6b-b72bbd04bab2\") " pod="metallb-system/speaker-92hjt" Jan 27 13:21:59 crc kubenswrapper[4786]: I0127 13:21:59.247823 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fb3825b5-f83e-4064-ac93-19ee9e441b42-metrics-certs\") pod \"controller-6968d8fdc4-qsx4x\" (UID: \"fb3825b5-f83e-4064-ac93-19ee9e441b42\") " pod="metallb-system/controller-6968d8fdc4-qsx4x" Jan 27 13:21:59 crc kubenswrapper[4786]: E0127 13:21:59.247837 4786 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 27 13:21:59 crc kubenswrapper[4786]: E0127 13:21:59.247936 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/987590ac-66b7-4ab0-8c6b-b72bbd04bab2-memberlist podName:987590ac-66b7-4ab0-8c6b-b72bbd04bab2 nodeName:}" failed. No retries permitted until 2026-01-27 13:22:00.247912606 +0000 UTC m=+903.458526785 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/987590ac-66b7-4ab0-8c6b-b72bbd04bab2-memberlist") pod "speaker-92hjt" (UID: "987590ac-66b7-4ab0-8c6b-b72bbd04bab2") : secret "metallb-memberlist" not found Jan 27 13:21:59 crc kubenswrapper[4786]: I0127 13:21:59.253093 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/987590ac-66b7-4ab0-8c6b-b72bbd04bab2-metrics-certs\") pod \"speaker-92hjt\" (UID: \"987590ac-66b7-4ab0-8c6b-b72bbd04bab2\") " pod="metallb-system/speaker-92hjt" Jan 27 13:21:59 crc kubenswrapper[4786]: I0127 13:21:59.253184 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fb3825b5-f83e-4064-ac93-19ee9e441b42-metrics-certs\") pod \"controller-6968d8fdc4-qsx4x\" (UID: \"fb3825b5-f83e-4064-ac93-19ee9e441b42\") " pod="metallb-system/controller-6968d8fdc4-qsx4x" Jan 27 13:21:59 crc kubenswrapper[4786]: I0127 13:21:59.426347 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-qsx4x" Jan 27 13:22:01 crc kubenswrapper[4786]: I0127 13:21:59.605946 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-qsx4x"] Jan 27 13:22:01 crc kubenswrapper[4786]: W0127 13:21:59.610219 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb3825b5_f83e_4064_ac93_19ee9e441b42.slice/crio-4ae0e1de3412c796d3d68a2829eb8d10db67d7d283c875085dc601dba5beb161 WatchSource:0}: Error finding container 4ae0e1de3412c796d3d68a2829eb8d10db67d7d283c875085dc601dba5beb161: Status 404 returned error can't find the container with id 4ae0e1de3412c796d3d68a2829eb8d10db67d7d283c875085dc601dba5beb161 Jan 27 13:22:01 crc kubenswrapper[4786]: I0127 13:21:59.655953 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-f8lw7" event={"ID":"0ffbeb23-d6ac-4a28-81a5-052f7c2c8618","Type":"ContainerStarted","Data":"c26cca417081396a35e6ee42421fd58df36e8cdc331064ae0e062c2518563e2e"} Jan 27 13:22:01 crc kubenswrapper[4786]: I0127 13:21:59.656950 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hwqdq" event={"ID":"0a3d249e-e994-4e5d-9970-04c4977f28c9","Type":"ContainerStarted","Data":"6441e2f76d3b0c2012f22918fe838c2bb7e1847c74f60a73d9b336452f8f0600"} Jan 27 13:22:01 crc kubenswrapper[4786]: I0127 13:21:59.657901 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-qsx4x" event={"ID":"fb3825b5-f83e-4064-ac93-19ee9e441b42","Type":"ContainerStarted","Data":"4ae0e1de3412c796d3d68a2829eb8d10db67d7d283c875085dc601dba5beb161"} Jan 27 13:22:01 crc kubenswrapper[4786]: I0127 13:22:00.261662 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/987590ac-66b7-4ab0-8c6b-b72bbd04bab2-memberlist\") pod 
\"speaker-92hjt\" (UID: \"987590ac-66b7-4ab0-8c6b-b72bbd04bab2\") " pod="metallb-system/speaker-92hjt" Jan 27 13:22:01 crc kubenswrapper[4786]: I0127 13:22:00.267198 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/987590ac-66b7-4ab0-8c6b-b72bbd04bab2-memberlist\") pod \"speaker-92hjt\" (UID: \"987590ac-66b7-4ab0-8c6b-b72bbd04bab2\") " pod="metallb-system/speaker-92hjt" Jan 27 13:22:01 crc kubenswrapper[4786]: I0127 13:22:00.317795 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-92hjt" Jan 27 13:22:01 crc kubenswrapper[4786]: W0127 13:22:00.341725 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod987590ac_66b7_4ab0_8c6b_b72bbd04bab2.slice/crio-e542c9ae6f0b8e08729843ea1e865a4407f0a9bfb64c5790e80a38d3d019f1b7 WatchSource:0}: Error finding container e542c9ae6f0b8e08729843ea1e865a4407f0a9bfb64c5790e80a38d3d019f1b7: Status 404 returned error can't find the container with id e542c9ae6f0b8e08729843ea1e865a4407f0a9bfb64c5790e80a38d3d019f1b7 Jan 27 13:22:01 crc kubenswrapper[4786]: I0127 13:22:00.669633 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-qsx4x" event={"ID":"fb3825b5-f83e-4064-ac93-19ee9e441b42","Type":"ContainerStarted","Data":"107e6da96fdf068dc27e402f573bb6bf33f4ebd3225711c8a901047e75291e7d"} Jan 27 13:22:01 crc kubenswrapper[4786]: I0127 13:22:00.669991 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-qsx4x" event={"ID":"fb3825b5-f83e-4064-ac93-19ee9e441b42","Type":"ContainerStarted","Data":"65a47e6828dd3b6dc0df997d911bc4bc66843399a038b64016b46ca48815ccae"} Jan 27 13:22:01 crc kubenswrapper[4786]: I0127 13:22:00.670043 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-qsx4x" Jan 27 13:22:01 crc 
kubenswrapper[4786]: I0127 13:22:00.678353 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-92hjt" event={"ID":"987590ac-66b7-4ab0-8c6b-b72bbd04bab2","Type":"ContainerStarted","Data":"f43c79586f36e6e6d798530635b4a220daae54c3a53656689a4cfd43d8e47b45"} Jan 27 13:22:01 crc kubenswrapper[4786]: I0127 13:22:00.678399 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-92hjt" event={"ID":"987590ac-66b7-4ab0-8c6b-b72bbd04bab2","Type":"ContainerStarted","Data":"e542c9ae6f0b8e08729843ea1e865a4407f0a9bfb64c5790e80a38d3d019f1b7"} Jan 27 13:22:01 crc kubenswrapper[4786]: I0127 13:22:00.691834 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-qsx4x" podStartSLOduration=2.691813636 podStartE2EDuration="2.691813636s" podCreationTimestamp="2026-01-27 13:21:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:22:00.689080331 +0000 UTC m=+903.899694480" watchObservedRunningTime="2026-01-27 13:22:00.691813636 +0000 UTC m=+903.902427765" Jan 27 13:22:01 crc kubenswrapper[4786]: I0127 13:22:01.689359 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-92hjt" event={"ID":"987590ac-66b7-4ab0-8c6b-b72bbd04bab2","Type":"ContainerStarted","Data":"58ae37b67c4b9981815c310df6ee963d1ff745f34b598135f4f9a5df368135a2"} Jan 27 13:22:01 crc kubenswrapper[4786]: I0127 13:22:01.711442 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-92hjt" podStartSLOduration=3.711420393 podStartE2EDuration="3.711420393s" podCreationTimestamp="2026-01-27 13:21:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:22:01.707063423 +0000 UTC m=+904.917677552" watchObservedRunningTime="2026-01-27 13:22:01.711420393 
+0000 UTC m=+904.922034512" Jan 27 13:22:02 crc kubenswrapper[4786]: I0127 13:22:02.703035 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-92hjt" Jan 27 13:22:04 crc kubenswrapper[4786]: I0127 13:22:04.114171 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5hlf2"] Jan 27 13:22:04 crc kubenswrapper[4786]: I0127 13:22:04.115403 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5hlf2" Jan 27 13:22:04 crc kubenswrapper[4786]: I0127 13:22:04.133952 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5hlf2"] Jan 27 13:22:04 crc kubenswrapper[4786]: I0127 13:22:04.137960 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77p8t\" (UniqueName: \"kubernetes.io/projected/e4a46447-c8f5-4e7f-8033-5ba07e5d942f-kube-api-access-77p8t\") pod \"certified-operators-5hlf2\" (UID: \"e4a46447-c8f5-4e7f-8033-5ba07e5d942f\") " pod="openshift-marketplace/certified-operators-5hlf2" Jan 27 13:22:04 crc kubenswrapper[4786]: I0127 13:22:04.138021 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4a46447-c8f5-4e7f-8033-5ba07e5d942f-catalog-content\") pod \"certified-operators-5hlf2\" (UID: \"e4a46447-c8f5-4e7f-8033-5ba07e5d942f\") " pod="openshift-marketplace/certified-operators-5hlf2" Jan 27 13:22:04 crc kubenswrapper[4786]: I0127 13:22:04.138062 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4a46447-c8f5-4e7f-8033-5ba07e5d942f-utilities\") pod \"certified-operators-5hlf2\" (UID: \"e4a46447-c8f5-4e7f-8033-5ba07e5d942f\") " pod="openshift-marketplace/certified-operators-5hlf2" Jan 27 13:22:04 crc 
kubenswrapper[4786]: I0127 13:22:04.239199 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4a46447-c8f5-4e7f-8033-5ba07e5d942f-utilities\") pod \"certified-operators-5hlf2\" (UID: \"e4a46447-c8f5-4e7f-8033-5ba07e5d942f\") " pod="openshift-marketplace/certified-operators-5hlf2" Jan 27 13:22:04 crc kubenswrapper[4786]: I0127 13:22:04.239296 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77p8t\" (UniqueName: \"kubernetes.io/projected/e4a46447-c8f5-4e7f-8033-5ba07e5d942f-kube-api-access-77p8t\") pod \"certified-operators-5hlf2\" (UID: \"e4a46447-c8f5-4e7f-8033-5ba07e5d942f\") " pod="openshift-marketplace/certified-operators-5hlf2" Jan 27 13:22:04 crc kubenswrapper[4786]: I0127 13:22:04.239328 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4a46447-c8f5-4e7f-8033-5ba07e5d942f-catalog-content\") pod \"certified-operators-5hlf2\" (UID: \"e4a46447-c8f5-4e7f-8033-5ba07e5d942f\") " pod="openshift-marketplace/certified-operators-5hlf2" Jan 27 13:22:04 crc kubenswrapper[4786]: I0127 13:22:04.239764 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4a46447-c8f5-4e7f-8033-5ba07e5d942f-utilities\") pod \"certified-operators-5hlf2\" (UID: \"e4a46447-c8f5-4e7f-8033-5ba07e5d942f\") " pod="openshift-marketplace/certified-operators-5hlf2" Jan 27 13:22:04 crc kubenswrapper[4786]: I0127 13:22:04.239990 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4a46447-c8f5-4e7f-8033-5ba07e5d942f-catalog-content\") pod \"certified-operators-5hlf2\" (UID: \"e4a46447-c8f5-4e7f-8033-5ba07e5d942f\") " pod="openshift-marketplace/certified-operators-5hlf2" Jan 27 13:22:04 crc kubenswrapper[4786]: I0127 13:22:04.262000 
4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77p8t\" (UniqueName: \"kubernetes.io/projected/e4a46447-c8f5-4e7f-8033-5ba07e5d942f-kube-api-access-77p8t\") pod \"certified-operators-5hlf2\" (UID: \"e4a46447-c8f5-4e7f-8033-5ba07e5d942f\") " pod="openshift-marketplace/certified-operators-5hlf2" Jan 27 13:22:04 crc kubenswrapper[4786]: I0127 13:22:04.432708 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5hlf2" Jan 27 13:22:07 crc kubenswrapper[4786]: I0127 13:22:07.987697 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5hlf2"] Jan 27 13:22:08 crc kubenswrapper[4786]: I0127 13:22:08.738324 4786 generic.go:334] "Generic (PLEG): container finished" podID="0a3d249e-e994-4e5d-9970-04c4977f28c9" containerID="b9565102bb0ae8f5e25234a85e3a7cac5eb27ecd00d23a13bd0eff571bde5ce1" exitCode=0 Jan 27 13:22:08 crc kubenswrapper[4786]: I0127 13:22:08.738532 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hwqdq" event={"ID":"0a3d249e-e994-4e5d-9970-04c4977f28c9","Type":"ContainerDied","Data":"b9565102bb0ae8f5e25234a85e3a7cac5eb27ecd00d23a13bd0eff571bde5ce1"} Jan 27 13:22:08 crc kubenswrapper[4786]: I0127 13:22:08.740139 4786 generic.go:334] "Generic (PLEG): container finished" podID="e4a46447-c8f5-4e7f-8033-5ba07e5d942f" containerID="17878d3943fda0c0d53e7c1055805d9164e7aca7fbe456b3821d16dc6574c8f0" exitCode=0 Jan 27 13:22:08 crc kubenswrapper[4786]: I0127 13:22:08.740189 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5hlf2" event={"ID":"e4a46447-c8f5-4e7f-8033-5ba07e5d942f","Type":"ContainerDied","Data":"17878d3943fda0c0d53e7c1055805d9164e7aca7fbe456b3821d16dc6574c8f0"} Jan 27 13:22:08 crc kubenswrapper[4786]: I0127 13:22:08.740210 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-5hlf2" event={"ID":"e4a46447-c8f5-4e7f-8033-5ba07e5d942f","Type":"ContainerStarted","Data":"9a4730d219de918140e3cf2dbc3c470a81a8ffa06a76f91832a6ce1428fd1933"} Jan 27 13:22:08 crc kubenswrapper[4786]: I0127 13:22:08.743152 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-f8lw7" event={"ID":"0ffbeb23-d6ac-4a28-81a5-052f7c2c8618","Type":"ContainerStarted","Data":"e5ba6a86708ac8e6622b763b127db74636aeb2c3f862aed90b9b3f86b8751a4f"} Jan 27 13:22:08 crc kubenswrapper[4786]: I0127 13:22:08.743279 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-f8lw7" Jan 27 13:22:08 crc kubenswrapper[4786]: I0127 13:22:08.788094 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-f8lw7" podStartSLOduration=1.635122006 podStartE2EDuration="10.788075537s" podCreationTimestamp="2026-01-27 13:21:58 +0000 UTC" firstStartedPulling="2026-01-27 13:21:58.96988778 +0000 UTC m=+902.180501889" lastFinishedPulling="2026-01-27 13:22:08.122841301 +0000 UTC m=+911.333455420" observedRunningTime="2026-01-27 13:22:08.784210151 +0000 UTC m=+911.994824290" watchObservedRunningTime="2026-01-27 13:22:08.788075537 +0000 UTC m=+911.998689656" Jan 27 13:22:09 crc kubenswrapper[4786]: I0127 13:22:09.429737 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-qsx4x" Jan 27 13:22:09 crc kubenswrapper[4786]: I0127 13:22:09.752625 4786 generic.go:334] "Generic (PLEG): container finished" podID="0a3d249e-e994-4e5d-9970-04c4977f28c9" containerID="f72d95848107c8d53708f00869dfad0cf3fd76ebec37b1f6802fb717359f0c07" exitCode=0 Jan 27 13:22:09 crc kubenswrapper[4786]: I0127 13:22:09.752744 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hwqdq" 
event={"ID":"0a3d249e-e994-4e5d-9970-04c4977f28c9","Type":"ContainerDied","Data":"f72d95848107c8d53708f00869dfad0cf3fd76ebec37b1f6802fb717359f0c07"} Jan 27 13:22:10 crc kubenswrapper[4786]: I0127 13:22:10.321110 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-92hjt" Jan 27 13:22:10 crc kubenswrapper[4786]: I0127 13:22:10.760579 4786 generic.go:334] "Generic (PLEG): container finished" podID="e4a46447-c8f5-4e7f-8033-5ba07e5d942f" containerID="436d024b7b60e5eea25a08382295a5e09d6f5a09aa679d7d4515bfec671a0031" exitCode=0 Jan 27 13:22:10 crc kubenswrapper[4786]: I0127 13:22:10.760661 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5hlf2" event={"ID":"e4a46447-c8f5-4e7f-8033-5ba07e5d942f","Type":"ContainerDied","Data":"436d024b7b60e5eea25a08382295a5e09d6f5a09aa679d7d4515bfec671a0031"} Jan 27 13:22:10 crc kubenswrapper[4786]: I0127 13:22:10.816203 4786 generic.go:334] "Generic (PLEG): container finished" podID="0a3d249e-e994-4e5d-9970-04c4977f28c9" containerID="6b0f2a7f589c8209dc52cbe6fd69b9d6e85f6332462ad8a8d06da5f93bbaa1a6" exitCode=0 Jan 27 13:22:10 crc kubenswrapper[4786]: I0127 13:22:10.816444 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hwqdq" event={"ID":"0a3d249e-e994-4e5d-9970-04c4977f28c9","Type":"ContainerDied","Data":"6b0f2a7f589c8209dc52cbe6fd69b9d6e85f6332462ad8a8d06da5f93bbaa1a6"} Jan 27 13:22:11 crc kubenswrapper[4786]: I0127 13:22:11.652646 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a62l49"] Jan 27 13:22:11 crc kubenswrapper[4786]: I0127 13:22:11.653824 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a62l49" Jan 27 13:22:11 crc kubenswrapper[4786]: I0127 13:22:11.655995 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 13:22:11 crc kubenswrapper[4786]: I0127 13:22:11.665699 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a62l49"] Jan 27 13:22:11 crc kubenswrapper[4786]: I0127 13:22:11.764173 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z72zr\" (UniqueName: \"kubernetes.io/projected/f397bffc-b155-4fbd-896c-82c4a3d83f3e-kube-api-access-z72zr\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a62l49\" (UID: \"f397bffc-b155-4fbd-896c-82c4a3d83f3e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a62l49" Jan 27 13:22:11 crc kubenswrapper[4786]: I0127 13:22:11.764214 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f397bffc-b155-4fbd-896c-82c4a3d83f3e-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a62l49\" (UID: \"f397bffc-b155-4fbd-896c-82c4a3d83f3e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a62l49" Jan 27 13:22:11 crc kubenswrapper[4786]: I0127 13:22:11.764288 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f397bffc-b155-4fbd-896c-82c4a3d83f3e-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a62l49\" (UID: \"f397bffc-b155-4fbd-896c-82c4a3d83f3e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a62l49" Jan 27 13:22:11 crc kubenswrapper[4786]: 
I0127 13:22:11.824129 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hwqdq" event={"ID":"0a3d249e-e994-4e5d-9970-04c4977f28c9","Type":"ContainerStarted","Data":"26cee2783b056c66d7c928530c38f658c8cdd4b9ab529bfad432d4fce21558f6"} Jan 27 13:22:11 crc kubenswrapper[4786]: I0127 13:22:11.824166 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hwqdq" event={"ID":"0a3d249e-e994-4e5d-9970-04c4977f28c9","Type":"ContainerStarted","Data":"2a4359c8d62f5bab23376371c4128a4f297351932501304ef69642fd57f52d6e"} Jan 27 13:22:11 crc kubenswrapper[4786]: I0127 13:22:11.865352 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f397bffc-b155-4fbd-896c-82c4a3d83f3e-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a62l49\" (UID: \"f397bffc-b155-4fbd-896c-82c4a3d83f3e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a62l49" Jan 27 13:22:11 crc kubenswrapper[4786]: I0127 13:22:11.865432 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z72zr\" (UniqueName: \"kubernetes.io/projected/f397bffc-b155-4fbd-896c-82c4a3d83f3e-kube-api-access-z72zr\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a62l49\" (UID: \"f397bffc-b155-4fbd-896c-82c4a3d83f3e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a62l49" Jan 27 13:22:11 crc kubenswrapper[4786]: I0127 13:22:11.865464 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f397bffc-b155-4fbd-896c-82c4a3d83f3e-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a62l49\" (UID: \"f397bffc-b155-4fbd-896c-82c4a3d83f3e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a62l49" Jan 27 13:22:11 crc 
kubenswrapper[4786]: I0127 13:22:11.865796 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f397bffc-b155-4fbd-896c-82c4a3d83f3e-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a62l49\" (UID: \"f397bffc-b155-4fbd-896c-82c4a3d83f3e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a62l49" Jan 27 13:22:11 crc kubenswrapper[4786]: I0127 13:22:11.865915 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f397bffc-b155-4fbd-896c-82c4a3d83f3e-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a62l49\" (UID: \"f397bffc-b155-4fbd-896c-82c4a3d83f3e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a62l49" Jan 27 13:22:11 crc kubenswrapper[4786]: I0127 13:22:11.897241 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z72zr\" (UniqueName: \"kubernetes.io/projected/f397bffc-b155-4fbd-896c-82c4a3d83f3e-kube-api-access-z72zr\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a62l49\" (UID: \"f397bffc-b155-4fbd-896c-82c4a3d83f3e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a62l49" Jan 27 13:22:11 crc kubenswrapper[4786]: I0127 13:22:11.967765 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a62l49" Jan 27 13:22:12 crc kubenswrapper[4786]: I0127 13:22:12.453565 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a62l49"] Jan 27 13:22:12 crc kubenswrapper[4786]: W0127 13:22:12.471947 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf397bffc_b155_4fbd_896c_82c4a3d83f3e.slice/crio-ad4d00b2d35d138e23714fb8166cfb62f670518e153d2ae5cebd6acf31c2f9ef WatchSource:0}: Error finding container ad4d00b2d35d138e23714fb8166cfb62f670518e153d2ae5cebd6acf31c2f9ef: Status 404 returned error can't find the container with id ad4d00b2d35d138e23714fb8166cfb62f670518e153d2ae5cebd6acf31c2f9ef Jan 27 13:22:12 crc kubenswrapper[4786]: I0127 13:22:12.830552 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a62l49" event={"ID":"f397bffc-b155-4fbd-896c-82c4a3d83f3e","Type":"ContainerStarted","Data":"fab01150efa18c732e39891113c0ecc35c4d40c06917ab14cdc145c459dc0990"} Jan 27 13:22:12 crc kubenswrapper[4786]: I0127 13:22:12.830920 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a62l49" event={"ID":"f397bffc-b155-4fbd-896c-82c4a3d83f3e","Type":"ContainerStarted","Data":"ad4d00b2d35d138e23714fb8166cfb62f670518e153d2ae5cebd6acf31c2f9ef"} Jan 27 13:22:12 crc kubenswrapper[4786]: I0127 13:22:12.836777 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hwqdq" event={"ID":"0a3d249e-e994-4e5d-9970-04c4977f28c9","Type":"ContainerStarted","Data":"98f320a7d9875007831df84ee06acdf0740b4da4ec53fd26772169ca1611cbf9"} Jan 27 13:22:12 crc kubenswrapper[4786]: I0127 13:22:12.836821 4786 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="metallb-system/frr-k8s-hwqdq" event={"ID":"0a3d249e-e994-4e5d-9970-04c4977f28c9","Type":"ContainerStarted","Data":"8679e543197981ce6494dd8a32bdb51bafb773eb68a6e4920924fba0132a458f"} Jan 27 13:22:12 crc kubenswrapper[4786]: I0127 13:22:12.839224 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5hlf2" event={"ID":"e4a46447-c8f5-4e7f-8033-5ba07e5d942f","Type":"ContainerStarted","Data":"f04b9b13833dde907a858910c0f87730bf9f2dd3d0876c1a6dddc8f959e213c0"} Jan 27 13:22:12 crc kubenswrapper[4786]: I0127 13:22:12.877920 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5hlf2" podStartSLOduration=5.631397357 podStartE2EDuration="8.877900306s" podCreationTimestamp="2026-01-27 13:22:04 +0000 UTC" firstStartedPulling="2026-01-27 13:22:08.741926985 +0000 UTC m=+911.952541104" lastFinishedPulling="2026-01-27 13:22:11.988429934 +0000 UTC m=+915.199044053" observedRunningTime="2026-01-27 13:22:12.872289993 +0000 UTC m=+916.082904132" watchObservedRunningTime="2026-01-27 13:22:12.877900306 +0000 UTC m=+916.088514425" Jan 27 13:22:13 crc kubenswrapper[4786]: I0127 13:22:13.850571 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hwqdq" event={"ID":"0a3d249e-e994-4e5d-9970-04c4977f28c9","Type":"ContainerStarted","Data":"b279fa487fe41cd7cdeba4efbf5acf1bf42c778d83739439983c448f701f7f54"} Jan 27 13:22:13 crc kubenswrapper[4786]: I0127 13:22:13.851108 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hwqdq" event={"ID":"0a3d249e-e994-4e5d-9970-04c4977f28c9","Type":"ContainerStarted","Data":"b1052a2c9d0a0a1dab2b45c40910adb17eede82a5b9c22825229030d354cb574"} Jan 27 13:22:13 crc kubenswrapper[4786]: I0127 13:22:13.851133 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-hwqdq" Jan 27 13:22:13 crc kubenswrapper[4786]: I0127 13:22:13.854740 
4786 generic.go:334] "Generic (PLEG): container finished" podID="f397bffc-b155-4fbd-896c-82c4a3d83f3e" containerID="fab01150efa18c732e39891113c0ecc35c4d40c06917ab14cdc145c459dc0990" exitCode=0 Jan 27 13:22:13 crc kubenswrapper[4786]: I0127 13:22:13.854830 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a62l49" event={"ID":"f397bffc-b155-4fbd-896c-82c4a3d83f3e","Type":"ContainerDied","Data":"fab01150efa18c732e39891113c0ecc35c4d40c06917ab14cdc145c459dc0990"} Jan 27 13:22:13 crc kubenswrapper[4786]: I0127 13:22:13.883993 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-hwqdq" podStartSLOduration=6.7324883920000005 podStartE2EDuration="15.883964233s" podCreationTimestamp="2026-01-27 13:21:58 +0000 UTC" firstStartedPulling="2026-01-27 13:21:58.947633152 +0000 UTC m=+902.158247271" lastFinishedPulling="2026-01-27 13:22:08.099108993 +0000 UTC m=+911.309723112" observedRunningTime="2026-01-27 13:22:13.876774876 +0000 UTC m=+917.087389015" watchObservedRunningTime="2026-01-27 13:22:13.883964233 +0000 UTC m=+917.094578352" Jan 27 13:22:14 crc kubenswrapper[4786]: I0127 13:22:14.434038 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5hlf2" Jan 27 13:22:14 crc kubenswrapper[4786]: I0127 13:22:14.434273 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5hlf2" Jan 27 13:22:14 crc kubenswrapper[4786]: I0127 13:22:14.474208 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5hlf2" Jan 27 13:22:18 crc kubenswrapper[4786]: I0127 13:22:18.740209 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-f8lw7" Jan 27 13:22:18 crc kubenswrapper[4786]: I0127 
13:22:18.765855 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-hwqdq" Jan 27 13:22:18 crc kubenswrapper[4786]: I0127 13:22:18.807078 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-hwqdq" Jan 27 13:22:18 crc kubenswrapper[4786]: I0127 13:22:18.820431 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zxdgz"] Jan 27 13:22:18 crc kubenswrapper[4786]: I0127 13:22:18.822146 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zxdgz" Jan 27 13:22:18 crc kubenswrapper[4786]: I0127 13:22:18.831114 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zxdgz"] Jan 27 13:22:18 crc kubenswrapper[4786]: I0127 13:22:18.858067 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/708f9703-578e-44d0-accc-34807bb16f4a-catalog-content\") pod \"community-operators-zxdgz\" (UID: \"708f9703-578e-44d0-accc-34807bb16f4a\") " pod="openshift-marketplace/community-operators-zxdgz" Jan 27 13:22:18 crc kubenswrapper[4786]: I0127 13:22:18.858113 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/708f9703-578e-44d0-accc-34807bb16f4a-utilities\") pod \"community-operators-zxdgz\" (UID: \"708f9703-578e-44d0-accc-34807bb16f4a\") " pod="openshift-marketplace/community-operators-zxdgz" Jan 27 13:22:18 crc kubenswrapper[4786]: I0127 13:22:18.858143 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clvq4\" (UniqueName: \"kubernetes.io/projected/708f9703-578e-44d0-accc-34807bb16f4a-kube-api-access-clvq4\") pod \"community-operators-zxdgz\" (UID: 
\"708f9703-578e-44d0-accc-34807bb16f4a\") " pod="openshift-marketplace/community-operators-zxdgz" Jan 27 13:22:18 crc kubenswrapper[4786]: I0127 13:22:18.959280 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/708f9703-578e-44d0-accc-34807bb16f4a-catalog-content\") pod \"community-operators-zxdgz\" (UID: \"708f9703-578e-44d0-accc-34807bb16f4a\") " pod="openshift-marketplace/community-operators-zxdgz" Jan 27 13:22:18 crc kubenswrapper[4786]: I0127 13:22:18.959325 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/708f9703-578e-44d0-accc-34807bb16f4a-utilities\") pod \"community-operators-zxdgz\" (UID: \"708f9703-578e-44d0-accc-34807bb16f4a\") " pod="openshift-marketplace/community-operators-zxdgz" Jan 27 13:22:18 crc kubenswrapper[4786]: I0127 13:22:18.959350 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clvq4\" (UniqueName: \"kubernetes.io/projected/708f9703-578e-44d0-accc-34807bb16f4a-kube-api-access-clvq4\") pod \"community-operators-zxdgz\" (UID: \"708f9703-578e-44d0-accc-34807bb16f4a\") " pod="openshift-marketplace/community-operators-zxdgz" Jan 27 13:22:18 crc kubenswrapper[4786]: I0127 13:22:18.959983 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/708f9703-578e-44d0-accc-34807bb16f4a-utilities\") pod \"community-operators-zxdgz\" (UID: \"708f9703-578e-44d0-accc-34807bb16f4a\") " pod="openshift-marketplace/community-operators-zxdgz" Jan 27 13:22:18 crc kubenswrapper[4786]: I0127 13:22:18.960012 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/708f9703-578e-44d0-accc-34807bb16f4a-catalog-content\") pod \"community-operators-zxdgz\" (UID: \"708f9703-578e-44d0-accc-34807bb16f4a\") 
" pod="openshift-marketplace/community-operators-zxdgz" Jan 27 13:22:18 crc kubenswrapper[4786]: I0127 13:22:18.980636 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clvq4\" (UniqueName: \"kubernetes.io/projected/708f9703-578e-44d0-accc-34807bb16f4a-kube-api-access-clvq4\") pod \"community-operators-zxdgz\" (UID: \"708f9703-578e-44d0-accc-34807bb16f4a\") " pod="openshift-marketplace/community-operators-zxdgz" Jan 27 13:22:19 crc kubenswrapper[4786]: I0127 13:22:19.143385 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zxdgz" Jan 27 13:22:23 crc kubenswrapper[4786]: I0127 13:22:23.929498 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zxdgz"] Jan 27 13:22:24 crc kubenswrapper[4786]: I0127 13:22:24.473349 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5hlf2" Jan 27 13:22:27 crc kubenswrapper[4786]: I0127 13:22:27.403281 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5hlf2"] Jan 27 13:22:27 crc kubenswrapper[4786]: I0127 13:22:27.403767 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5hlf2" podUID="e4a46447-c8f5-4e7f-8033-5ba07e5d942f" containerName="registry-server" containerID="cri-o://f04b9b13833dde907a858910c0f87730bf9f2dd3d0876c1a6dddc8f959e213c0" gracePeriod=2 Jan 27 13:22:28 crc kubenswrapper[4786]: W0127 13:22:28.267311 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod708f9703_578e_44d0_accc_34807bb16f4a.slice/crio-702463baa9e78abfb66c00a67fca58d3d144d88737ac9b04b93533380fa93931 WatchSource:0}: Error finding container 702463baa9e78abfb66c00a67fca58d3d144d88737ac9b04b93533380fa93931: Status 404 returned error 
can't find the container with id 702463baa9e78abfb66c00a67fca58d3d144d88737ac9b04b93533380fa93931 Jan 27 13:22:28 crc kubenswrapper[4786]: I0127 13:22:28.512930 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5hlf2" Jan 27 13:22:28 crc kubenswrapper[4786]: I0127 13:22:28.579979 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77p8t\" (UniqueName: \"kubernetes.io/projected/e4a46447-c8f5-4e7f-8033-5ba07e5d942f-kube-api-access-77p8t\") pod \"e4a46447-c8f5-4e7f-8033-5ba07e5d942f\" (UID: \"e4a46447-c8f5-4e7f-8033-5ba07e5d942f\") " Jan 27 13:22:28 crc kubenswrapper[4786]: I0127 13:22:28.580105 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4a46447-c8f5-4e7f-8033-5ba07e5d942f-utilities\") pod \"e4a46447-c8f5-4e7f-8033-5ba07e5d942f\" (UID: \"e4a46447-c8f5-4e7f-8033-5ba07e5d942f\") " Jan 27 13:22:28 crc kubenswrapper[4786]: I0127 13:22:28.580127 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4a46447-c8f5-4e7f-8033-5ba07e5d942f-catalog-content\") pod \"e4a46447-c8f5-4e7f-8033-5ba07e5d942f\" (UID: \"e4a46447-c8f5-4e7f-8033-5ba07e5d942f\") " Jan 27 13:22:28 crc kubenswrapper[4786]: I0127 13:22:28.582233 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4a46447-c8f5-4e7f-8033-5ba07e5d942f-utilities" (OuterVolumeSpecName: "utilities") pod "e4a46447-c8f5-4e7f-8033-5ba07e5d942f" (UID: "e4a46447-c8f5-4e7f-8033-5ba07e5d942f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:22:28 crc kubenswrapper[4786]: I0127 13:22:28.586013 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4a46447-c8f5-4e7f-8033-5ba07e5d942f-kube-api-access-77p8t" (OuterVolumeSpecName: "kube-api-access-77p8t") pod "e4a46447-c8f5-4e7f-8033-5ba07e5d942f" (UID: "e4a46447-c8f5-4e7f-8033-5ba07e5d942f"). InnerVolumeSpecName "kube-api-access-77p8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:22:28 crc kubenswrapper[4786]: I0127 13:22:28.622369 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4a46447-c8f5-4e7f-8033-5ba07e5d942f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e4a46447-c8f5-4e7f-8033-5ba07e5d942f" (UID: "e4a46447-c8f5-4e7f-8033-5ba07e5d942f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:22:28 crc kubenswrapper[4786]: I0127 13:22:28.682068 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4a46447-c8f5-4e7f-8033-5ba07e5d942f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 13:22:28 crc kubenswrapper[4786]: I0127 13:22:28.682122 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4a46447-c8f5-4e7f-8033-5ba07e5d942f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 13:22:28 crc kubenswrapper[4786]: I0127 13:22:28.682136 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77p8t\" (UniqueName: \"kubernetes.io/projected/e4a46447-c8f5-4e7f-8033-5ba07e5d942f-kube-api-access-77p8t\") on node \"crc\" DevicePath \"\"" Jan 27 13:22:28 crc kubenswrapper[4786]: I0127 13:22:28.767670 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-hwqdq" Jan 27 13:22:28 crc kubenswrapper[4786]: I0127 
13:22:28.949233 4786 generic.go:334] "Generic (PLEG): container finished" podID="f397bffc-b155-4fbd-896c-82c4a3d83f3e" containerID="b8d2131669227e6adde7c37177ab62f86b2a0b6e08e94d9f964547d1e76bbbe8" exitCode=0 Jan 27 13:22:28 crc kubenswrapper[4786]: I0127 13:22:28.949283 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a62l49" event={"ID":"f397bffc-b155-4fbd-896c-82c4a3d83f3e","Type":"ContainerDied","Data":"b8d2131669227e6adde7c37177ab62f86b2a0b6e08e94d9f964547d1e76bbbe8"} Jan 27 13:22:28 crc kubenswrapper[4786]: I0127 13:22:28.952457 4786 generic.go:334] "Generic (PLEG): container finished" podID="708f9703-578e-44d0-accc-34807bb16f4a" containerID="e7ab5f95f02551079ef6ae5331d71524c7cf4308425e2db8ebd243469bb068c1" exitCode=0 Jan 27 13:22:28 crc kubenswrapper[4786]: I0127 13:22:28.952518 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxdgz" event={"ID":"708f9703-578e-44d0-accc-34807bb16f4a","Type":"ContainerDied","Data":"e7ab5f95f02551079ef6ae5331d71524c7cf4308425e2db8ebd243469bb068c1"} Jan 27 13:22:28 crc kubenswrapper[4786]: I0127 13:22:28.952545 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxdgz" event={"ID":"708f9703-578e-44d0-accc-34807bb16f4a","Type":"ContainerStarted","Data":"702463baa9e78abfb66c00a67fca58d3d144d88737ac9b04b93533380fa93931"} Jan 27 13:22:28 crc kubenswrapper[4786]: I0127 13:22:28.955394 4786 generic.go:334] "Generic (PLEG): container finished" podID="e4a46447-c8f5-4e7f-8033-5ba07e5d942f" containerID="f04b9b13833dde907a858910c0f87730bf9f2dd3d0876c1a6dddc8f959e213c0" exitCode=0 Jan 27 13:22:28 crc kubenswrapper[4786]: I0127 13:22:28.955434 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5hlf2" 
event={"ID":"e4a46447-c8f5-4e7f-8033-5ba07e5d942f","Type":"ContainerDied","Data":"f04b9b13833dde907a858910c0f87730bf9f2dd3d0876c1a6dddc8f959e213c0"} Jan 27 13:22:28 crc kubenswrapper[4786]: I0127 13:22:28.955461 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5hlf2" event={"ID":"e4a46447-c8f5-4e7f-8033-5ba07e5d942f","Type":"ContainerDied","Data":"9a4730d219de918140e3cf2dbc3c470a81a8ffa06a76f91832a6ce1428fd1933"} Jan 27 13:22:28 crc kubenswrapper[4786]: I0127 13:22:28.955482 4786 scope.go:117] "RemoveContainer" containerID="f04b9b13833dde907a858910c0f87730bf9f2dd3d0876c1a6dddc8f959e213c0" Jan 27 13:22:28 crc kubenswrapper[4786]: I0127 13:22:28.955636 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5hlf2" Jan 27 13:22:28 crc kubenswrapper[4786]: I0127 13:22:28.975631 4786 scope.go:117] "RemoveContainer" containerID="436d024b7b60e5eea25a08382295a5e09d6f5a09aa679d7d4515bfec671a0031" Jan 27 13:22:29 crc kubenswrapper[4786]: I0127 13:22:29.002993 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5hlf2"] Jan 27 13:22:29 crc kubenswrapper[4786]: I0127 13:22:29.005007 4786 scope.go:117] "RemoveContainer" containerID="17878d3943fda0c0d53e7c1055805d9164e7aca7fbe456b3821d16dc6574c8f0" Jan 27 13:22:29 crc kubenswrapper[4786]: I0127 13:22:29.007818 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5hlf2"] Jan 27 13:22:29 crc kubenswrapper[4786]: I0127 13:22:29.018402 4786 scope.go:117] "RemoveContainer" containerID="f04b9b13833dde907a858910c0f87730bf9f2dd3d0876c1a6dddc8f959e213c0" Jan 27 13:22:29 crc kubenswrapper[4786]: E0127 13:22:29.018852 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f04b9b13833dde907a858910c0f87730bf9f2dd3d0876c1a6dddc8f959e213c0\": container 
with ID starting with f04b9b13833dde907a858910c0f87730bf9f2dd3d0876c1a6dddc8f959e213c0 not found: ID does not exist" containerID="f04b9b13833dde907a858910c0f87730bf9f2dd3d0876c1a6dddc8f959e213c0" Jan 27 13:22:29 crc kubenswrapper[4786]: I0127 13:22:29.018883 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f04b9b13833dde907a858910c0f87730bf9f2dd3d0876c1a6dddc8f959e213c0"} err="failed to get container status \"f04b9b13833dde907a858910c0f87730bf9f2dd3d0876c1a6dddc8f959e213c0\": rpc error: code = NotFound desc = could not find container \"f04b9b13833dde907a858910c0f87730bf9f2dd3d0876c1a6dddc8f959e213c0\": container with ID starting with f04b9b13833dde907a858910c0f87730bf9f2dd3d0876c1a6dddc8f959e213c0 not found: ID does not exist" Jan 27 13:22:29 crc kubenswrapper[4786]: I0127 13:22:29.018902 4786 scope.go:117] "RemoveContainer" containerID="436d024b7b60e5eea25a08382295a5e09d6f5a09aa679d7d4515bfec671a0031" Jan 27 13:22:29 crc kubenswrapper[4786]: E0127 13:22:29.019252 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"436d024b7b60e5eea25a08382295a5e09d6f5a09aa679d7d4515bfec671a0031\": container with ID starting with 436d024b7b60e5eea25a08382295a5e09d6f5a09aa679d7d4515bfec671a0031 not found: ID does not exist" containerID="436d024b7b60e5eea25a08382295a5e09d6f5a09aa679d7d4515bfec671a0031" Jan 27 13:22:29 crc kubenswrapper[4786]: I0127 13:22:29.019290 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"436d024b7b60e5eea25a08382295a5e09d6f5a09aa679d7d4515bfec671a0031"} err="failed to get container status \"436d024b7b60e5eea25a08382295a5e09d6f5a09aa679d7d4515bfec671a0031\": rpc error: code = NotFound desc = could not find container \"436d024b7b60e5eea25a08382295a5e09d6f5a09aa679d7d4515bfec671a0031\": container with ID starting with 436d024b7b60e5eea25a08382295a5e09d6f5a09aa679d7d4515bfec671a0031 not 
found: ID does not exist" Jan 27 13:22:29 crc kubenswrapper[4786]: I0127 13:22:29.019329 4786 scope.go:117] "RemoveContainer" containerID="17878d3943fda0c0d53e7c1055805d9164e7aca7fbe456b3821d16dc6574c8f0" Jan 27 13:22:29 crc kubenswrapper[4786]: E0127 13:22:29.019623 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17878d3943fda0c0d53e7c1055805d9164e7aca7fbe456b3821d16dc6574c8f0\": container with ID starting with 17878d3943fda0c0d53e7c1055805d9164e7aca7fbe456b3821d16dc6574c8f0 not found: ID does not exist" containerID="17878d3943fda0c0d53e7c1055805d9164e7aca7fbe456b3821d16dc6574c8f0" Jan 27 13:22:29 crc kubenswrapper[4786]: I0127 13:22:29.019649 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17878d3943fda0c0d53e7c1055805d9164e7aca7fbe456b3821d16dc6574c8f0"} err="failed to get container status \"17878d3943fda0c0d53e7c1055805d9164e7aca7fbe456b3821d16dc6574c8f0\": rpc error: code = NotFound desc = could not find container \"17878d3943fda0c0d53e7c1055805d9164e7aca7fbe456b3821d16dc6574c8f0\": container with ID starting with 17878d3943fda0c0d53e7c1055805d9164e7aca7fbe456b3821d16dc6574c8f0 not found: ID does not exist" Jan 27 13:22:29 crc kubenswrapper[4786]: I0127 13:22:29.472186 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4a46447-c8f5-4e7f-8033-5ba07e5d942f" path="/var/lib/kubelet/pods/e4a46447-c8f5-4e7f-8033-5ba07e5d942f/volumes" Jan 27 13:22:29 crc kubenswrapper[4786]: I0127 13:22:29.965325 4786 generic.go:334] "Generic (PLEG): container finished" podID="f397bffc-b155-4fbd-896c-82c4a3d83f3e" containerID="f0e45c46e7beaacd88a290a0b5b5f6d790b61aa733ec99c8afc99ecc261a5204" exitCode=0 Jan 27 13:22:29 crc kubenswrapper[4786]: I0127 13:22:29.965382 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a62l49" 
event={"ID":"f397bffc-b155-4fbd-896c-82c4a3d83f3e","Type":"ContainerDied","Data":"f0e45c46e7beaacd88a290a0b5b5f6d790b61aa733ec99c8afc99ecc261a5204"} Jan 27 13:22:30 crc kubenswrapper[4786]: I0127 13:22:30.973016 4786 generic.go:334] "Generic (PLEG): container finished" podID="708f9703-578e-44d0-accc-34807bb16f4a" containerID="42fcc2129e4358a529ca66edb9cef1f8cc3d3de1e77a862e19458b5a36d8cc64" exitCode=0 Jan 27 13:22:30 crc kubenswrapper[4786]: I0127 13:22:30.973142 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxdgz" event={"ID":"708f9703-578e-44d0-accc-34807bb16f4a","Type":"ContainerDied","Data":"42fcc2129e4358a529ca66edb9cef1f8cc3d3de1e77a862e19458b5a36d8cc64"} Jan 27 13:22:31 crc kubenswrapper[4786]: I0127 13:22:31.187429 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a62l49" Jan 27 13:22:31 crc kubenswrapper[4786]: I0127 13:22:31.318592 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f397bffc-b155-4fbd-896c-82c4a3d83f3e-bundle\") pod \"f397bffc-b155-4fbd-896c-82c4a3d83f3e\" (UID: \"f397bffc-b155-4fbd-896c-82c4a3d83f3e\") " Jan 27 13:22:31 crc kubenswrapper[4786]: I0127 13:22:31.318756 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z72zr\" (UniqueName: \"kubernetes.io/projected/f397bffc-b155-4fbd-896c-82c4a3d83f3e-kube-api-access-z72zr\") pod \"f397bffc-b155-4fbd-896c-82c4a3d83f3e\" (UID: \"f397bffc-b155-4fbd-896c-82c4a3d83f3e\") " Jan 27 13:22:31 crc kubenswrapper[4786]: I0127 13:22:31.318784 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f397bffc-b155-4fbd-896c-82c4a3d83f3e-util\") pod \"f397bffc-b155-4fbd-896c-82c4a3d83f3e\" (UID: 
\"f397bffc-b155-4fbd-896c-82c4a3d83f3e\") " Jan 27 13:22:31 crc kubenswrapper[4786]: I0127 13:22:31.319671 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f397bffc-b155-4fbd-896c-82c4a3d83f3e-bundle" (OuterVolumeSpecName: "bundle") pod "f397bffc-b155-4fbd-896c-82c4a3d83f3e" (UID: "f397bffc-b155-4fbd-896c-82c4a3d83f3e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:22:31 crc kubenswrapper[4786]: I0127 13:22:31.323738 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f397bffc-b155-4fbd-896c-82c4a3d83f3e-kube-api-access-z72zr" (OuterVolumeSpecName: "kube-api-access-z72zr") pod "f397bffc-b155-4fbd-896c-82c4a3d83f3e" (UID: "f397bffc-b155-4fbd-896c-82c4a3d83f3e"). InnerVolumeSpecName "kube-api-access-z72zr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:22:31 crc kubenswrapper[4786]: I0127 13:22:31.330094 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f397bffc-b155-4fbd-896c-82c4a3d83f3e-util" (OuterVolumeSpecName: "util") pod "f397bffc-b155-4fbd-896c-82c4a3d83f3e" (UID: "f397bffc-b155-4fbd-896c-82c4a3d83f3e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:22:31 crc kubenswrapper[4786]: I0127 13:22:31.420179 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z72zr\" (UniqueName: \"kubernetes.io/projected/f397bffc-b155-4fbd-896c-82c4a3d83f3e-kube-api-access-z72zr\") on node \"crc\" DevicePath \"\"" Jan 27 13:22:31 crc kubenswrapper[4786]: I0127 13:22:31.420368 4786 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f397bffc-b155-4fbd-896c-82c4a3d83f3e-util\") on node \"crc\" DevicePath \"\"" Jan 27 13:22:31 crc kubenswrapper[4786]: I0127 13:22:31.420430 4786 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f397bffc-b155-4fbd-896c-82c4a3d83f3e-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 13:22:31 crc kubenswrapper[4786]: I0127 13:22:31.985194 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a62l49" event={"ID":"f397bffc-b155-4fbd-896c-82c4a3d83f3e","Type":"ContainerDied","Data":"ad4d00b2d35d138e23714fb8166cfb62f670518e153d2ae5cebd6acf31c2f9ef"} Jan 27 13:22:31 crc kubenswrapper[4786]: I0127 13:22:31.985467 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad4d00b2d35d138e23714fb8166cfb62f670518e153d2ae5cebd6acf31c2f9ef" Jan 27 13:22:31 crc kubenswrapper[4786]: I0127 13:22:31.985548 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a62l49" Jan 27 13:22:32 crc kubenswrapper[4786]: I0127 13:22:32.995223 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxdgz" event={"ID":"708f9703-578e-44d0-accc-34807bb16f4a","Type":"ContainerStarted","Data":"8d703f3016f0fceac7de9ac73d5d8ec162fb75be5cf13444a48be777d8216f01"} Jan 27 13:22:33 crc kubenswrapper[4786]: I0127 13:22:33.012274 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zxdgz" podStartSLOduration=11.724062396 podStartE2EDuration="15.012035099s" podCreationTimestamp="2026-01-27 13:22:18 +0000 UTC" firstStartedPulling="2026-01-27 13:22:28.954445319 +0000 UTC m=+932.165059438" lastFinishedPulling="2026-01-27 13:22:32.242418022 +0000 UTC m=+935.453032141" observedRunningTime="2026-01-27 13:22:33.011626868 +0000 UTC m=+936.222240977" watchObservedRunningTime="2026-01-27 13:22:33.012035099 +0000 UTC m=+936.222649218" Jan 27 13:22:33 crc kubenswrapper[4786]: I0127 13:22:33.817410 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q8jp9"] Jan 27 13:22:33 crc kubenswrapper[4786]: E0127 13:22:33.817984 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4a46447-c8f5-4e7f-8033-5ba07e5d942f" containerName="extract-content" Jan 27 13:22:33 crc kubenswrapper[4786]: I0127 13:22:33.818005 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4a46447-c8f5-4e7f-8033-5ba07e5d942f" containerName="extract-content" Jan 27 13:22:33 crc kubenswrapper[4786]: E0127 13:22:33.818019 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f397bffc-b155-4fbd-896c-82c4a3d83f3e" containerName="extract" Jan 27 13:22:33 crc kubenswrapper[4786]: I0127 13:22:33.818027 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f397bffc-b155-4fbd-896c-82c4a3d83f3e" 
containerName="extract" Jan 27 13:22:33 crc kubenswrapper[4786]: E0127 13:22:33.818043 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f397bffc-b155-4fbd-896c-82c4a3d83f3e" containerName="util" Jan 27 13:22:33 crc kubenswrapper[4786]: I0127 13:22:33.818053 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f397bffc-b155-4fbd-896c-82c4a3d83f3e" containerName="util" Jan 27 13:22:33 crc kubenswrapper[4786]: E0127 13:22:33.818063 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4a46447-c8f5-4e7f-8033-5ba07e5d942f" containerName="extract-utilities" Jan 27 13:22:33 crc kubenswrapper[4786]: I0127 13:22:33.818071 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4a46447-c8f5-4e7f-8033-5ba07e5d942f" containerName="extract-utilities" Jan 27 13:22:33 crc kubenswrapper[4786]: E0127 13:22:33.818082 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f397bffc-b155-4fbd-896c-82c4a3d83f3e" containerName="pull" Jan 27 13:22:33 crc kubenswrapper[4786]: I0127 13:22:33.818089 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f397bffc-b155-4fbd-896c-82c4a3d83f3e" containerName="pull" Jan 27 13:22:33 crc kubenswrapper[4786]: E0127 13:22:33.818113 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4a46447-c8f5-4e7f-8033-5ba07e5d942f" containerName="registry-server" Jan 27 13:22:33 crc kubenswrapper[4786]: I0127 13:22:33.818121 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4a46447-c8f5-4e7f-8033-5ba07e5d942f" containerName="registry-server" Jan 27 13:22:33 crc kubenswrapper[4786]: I0127 13:22:33.818247 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="f397bffc-b155-4fbd-896c-82c4a3d83f3e" containerName="extract" Jan 27 13:22:33 crc kubenswrapper[4786]: I0127 13:22:33.818261 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4a46447-c8f5-4e7f-8033-5ba07e5d942f" containerName="registry-server" Jan 27 13:22:33 crc 
kubenswrapper[4786]: I0127 13:22:33.819311 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q8jp9" Jan 27 13:22:33 crc kubenswrapper[4786]: I0127 13:22:33.832980 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q8jp9"] Jan 27 13:22:33 crc kubenswrapper[4786]: I0127 13:22:33.856956 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32f7bf77-376b-4889-a2a1-c7d0b8b5c93b-catalog-content\") pod \"redhat-marketplace-q8jp9\" (UID: \"32f7bf77-376b-4889-a2a1-c7d0b8b5c93b\") " pod="openshift-marketplace/redhat-marketplace-q8jp9" Jan 27 13:22:33 crc kubenswrapper[4786]: I0127 13:22:33.857001 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32f7bf77-376b-4889-a2a1-c7d0b8b5c93b-utilities\") pod \"redhat-marketplace-q8jp9\" (UID: \"32f7bf77-376b-4889-a2a1-c7d0b8b5c93b\") " pod="openshift-marketplace/redhat-marketplace-q8jp9" Jan 27 13:22:33 crc kubenswrapper[4786]: I0127 13:22:33.857041 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwgbr\" (UniqueName: \"kubernetes.io/projected/32f7bf77-376b-4889-a2a1-c7d0b8b5c93b-kube-api-access-nwgbr\") pod \"redhat-marketplace-q8jp9\" (UID: \"32f7bf77-376b-4889-a2a1-c7d0b8b5c93b\") " pod="openshift-marketplace/redhat-marketplace-q8jp9" Jan 27 13:22:33 crc kubenswrapper[4786]: I0127 13:22:33.958305 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwgbr\" (UniqueName: \"kubernetes.io/projected/32f7bf77-376b-4889-a2a1-c7d0b8b5c93b-kube-api-access-nwgbr\") pod \"redhat-marketplace-q8jp9\" (UID: \"32f7bf77-376b-4889-a2a1-c7d0b8b5c93b\") " pod="openshift-marketplace/redhat-marketplace-q8jp9" Jan 27 
13:22:33 crc kubenswrapper[4786]: I0127 13:22:33.958402 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32f7bf77-376b-4889-a2a1-c7d0b8b5c93b-catalog-content\") pod \"redhat-marketplace-q8jp9\" (UID: \"32f7bf77-376b-4889-a2a1-c7d0b8b5c93b\") " pod="openshift-marketplace/redhat-marketplace-q8jp9" Jan 27 13:22:33 crc kubenswrapper[4786]: I0127 13:22:33.958431 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32f7bf77-376b-4889-a2a1-c7d0b8b5c93b-utilities\") pod \"redhat-marketplace-q8jp9\" (UID: \"32f7bf77-376b-4889-a2a1-c7d0b8b5c93b\") " pod="openshift-marketplace/redhat-marketplace-q8jp9" Jan 27 13:22:33 crc kubenswrapper[4786]: I0127 13:22:33.958885 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32f7bf77-376b-4889-a2a1-c7d0b8b5c93b-utilities\") pod \"redhat-marketplace-q8jp9\" (UID: \"32f7bf77-376b-4889-a2a1-c7d0b8b5c93b\") " pod="openshift-marketplace/redhat-marketplace-q8jp9" Jan 27 13:22:33 crc kubenswrapper[4786]: I0127 13:22:33.958951 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32f7bf77-376b-4889-a2a1-c7d0b8b5c93b-catalog-content\") pod \"redhat-marketplace-q8jp9\" (UID: \"32f7bf77-376b-4889-a2a1-c7d0b8b5c93b\") " pod="openshift-marketplace/redhat-marketplace-q8jp9" Jan 27 13:22:33 crc kubenswrapper[4786]: I0127 13:22:33.989049 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwgbr\" (UniqueName: \"kubernetes.io/projected/32f7bf77-376b-4889-a2a1-c7d0b8b5c93b-kube-api-access-nwgbr\") pod \"redhat-marketplace-q8jp9\" (UID: \"32f7bf77-376b-4889-a2a1-c7d0b8b5c93b\") " pod="openshift-marketplace/redhat-marketplace-q8jp9" Jan 27 13:22:34 crc kubenswrapper[4786]: I0127 13:22:34.135205 4786 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q8jp9" Jan 27 13:22:34 crc kubenswrapper[4786]: I0127 13:22:34.347509 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q8jp9"] Jan 27 13:22:34 crc kubenswrapper[4786]: W0127 13:22:34.357119 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32f7bf77_376b_4889_a2a1_c7d0b8b5c93b.slice/crio-f37b7ce73e4ef4f4231c8122b809393022cf30afb7079cec4799409da16e2868 WatchSource:0}: Error finding container f37b7ce73e4ef4f4231c8122b809393022cf30afb7079cec4799409da16e2868: Status 404 returned error can't find the container with id f37b7ce73e4ef4f4231c8122b809393022cf30afb7079cec4799409da16e2868 Jan 27 13:22:35 crc kubenswrapper[4786]: I0127 13:22:35.009345 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8jp9" event={"ID":"32f7bf77-376b-4889-a2a1-c7d0b8b5c93b","Type":"ContainerStarted","Data":"ab9d7c52ee458bee1b411205f2437dd7efe8b230c14c4437cb100a420367f646"} Jan 27 13:22:35 crc kubenswrapper[4786]: I0127 13:22:35.009660 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8jp9" event={"ID":"32f7bf77-376b-4889-a2a1-c7d0b8b5c93b","Type":"ContainerStarted","Data":"f37b7ce73e4ef4f4231c8122b809393022cf30afb7079cec4799409da16e2868"} Jan 27 13:22:36 crc kubenswrapper[4786]: I0127 13:22:36.019507 4786 generic.go:334] "Generic (PLEG): container finished" podID="32f7bf77-376b-4889-a2a1-c7d0b8b5c93b" containerID="ab9d7c52ee458bee1b411205f2437dd7efe8b230c14c4437cb100a420367f646" exitCode=0 Jan 27 13:22:36 crc kubenswrapper[4786]: I0127 13:22:36.019577 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8jp9" 
event={"ID":"32f7bf77-376b-4889-a2a1-c7d0b8b5c93b","Type":"ContainerDied","Data":"ab9d7c52ee458bee1b411205f2437dd7efe8b230c14c4437cb100a420367f646"} Jan 27 13:22:38 crc kubenswrapper[4786]: I0127 13:22:38.031558 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8jp9" event={"ID":"32f7bf77-376b-4889-a2a1-c7d0b8b5c93b","Type":"ContainerStarted","Data":"321f255bf3183d261d2af7d3240668e228ec824bbde2432e60b9dabf912e5174"} Jan 27 13:22:39 crc kubenswrapper[4786]: I0127 13:22:39.038824 4786 generic.go:334] "Generic (PLEG): container finished" podID="32f7bf77-376b-4889-a2a1-c7d0b8b5c93b" containerID="321f255bf3183d261d2af7d3240668e228ec824bbde2432e60b9dabf912e5174" exitCode=0 Jan 27 13:22:39 crc kubenswrapper[4786]: I0127 13:22:39.039068 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8jp9" event={"ID":"32f7bf77-376b-4889-a2a1-c7d0b8b5c93b","Type":"ContainerDied","Data":"321f255bf3183d261d2af7d3240668e228ec824bbde2432e60b9dabf912e5174"} Jan 27 13:22:39 crc kubenswrapper[4786]: I0127 13:22:39.144300 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zxdgz" Jan 27 13:22:39 crc kubenswrapper[4786]: I0127 13:22:39.144348 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zxdgz" Jan 27 13:22:39 crc kubenswrapper[4786]: I0127 13:22:39.183502 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zxdgz" Jan 27 13:22:39 crc kubenswrapper[4786]: I0127 13:22:39.532776 4786 patch_prober.go:28] interesting pod/machine-config-daemon-7bxtk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:22:39 crc 
kubenswrapper[4786]: I0127 13:22:39.532846 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:22:40 crc kubenswrapper[4786]: I0127 13:22:40.047912 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8jp9" event={"ID":"32f7bf77-376b-4889-a2a1-c7d0b8b5c93b","Type":"ContainerStarted","Data":"d2bb8ec41f12a1ce7fd708501c5367ebed1278e0a1fa289bbe3827de0b10208f"} Jan 27 13:22:40 crc kubenswrapper[4786]: I0127 13:22:40.066526 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q8jp9" podStartSLOduration=3.629897624 podStartE2EDuration="7.066505797s" podCreationTimestamp="2026-01-27 13:22:33 +0000 UTC" firstStartedPulling="2026-01-27 13:22:36.022778097 +0000 UTC m=+939.233392256" lastFinishedPulling="2026-01-27 13:22:39.4593863 +0000 UTC m=+942.670000429" observedRunningTime="2026-01-27 13:22:40.064094061 +0000 UTC m=+943.274708180" watchObservedRunningTime="2026-01-27 13:22:40.066505797 +0000 UTC m=+943.277119926" Jan 27 13:22:40 crc kubenswrapper[4786]: I0127 13:22:40.087391 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zxdgz" Jan 27 13:22:40 crc kubenswrapper[4786]: I0127 13:22:40.579796 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-6w2cb"] Jan 27 13:22:40 crc kubenswrapper[4786]: I0127 13:22:40.580964 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-6w2cb" Jan 27 13:22:40 crc kubenswrapper[4786]: I0127 13:22:40.583264 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 27 13:22:40 crc kubenswrapper[4786]: I0127 13:22:40.584398 4786 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-8jjsm" Jan 27 13:22:40 crc kubenswrapper[4786]: I0127 13:22:40.584637 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 27 13:22:40 crc kubenswrapper[4786]: I0127 13:22:40.595247 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-6w2cb"] Jan 27 13:22:40 crc kubenswrapper[4786]: I0127 13:22:40.664811 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpjct\" (UniqueName: \"kubernetes.io/projected/c664e24f-ed64-4970-bce5-a5dcffcb6497-kube-api-access-kpjct\") pod \"cert-manager-operator-controller-manager-64cf6dff88-6w2cb\" (UID: \"c664e24f-ed64-4970-bce5-a5dcffcb6497\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-6w2cb" Jan 27 13:22:40 crc kubenswrapper[4786]: I0127 13:22:40.664924 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c664e24f-ed64-4970-bce5-a5dcffcb6497-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-6w2cb\" (UID: \"c664e24f-ed64-4970-bce5-a5dcffcb6497\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-6w2cb" Jan 27 13:22:40 crc kubenswrapper[4786]: I0127 13:22:40.765578 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-kpjct\" (UniqueName: \"kubernetes.io/projected/c664e24f-ed64-4970-bce5-a5dcffcb6497-kube-api-access-kpjct\") pod \"cert-manager-operator-controller-manager-64cf6dff88-6w2cb\" (UID: \"c664e24f-ed64-4970-bce5-a5dcffcb6497\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-6w2cb" Jan 27 13:22:40 crc kubenswrapper[4786]: I0127 13:22:40.765698 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c664e24f-ed64-4970-bce5-a5dcffcb6497-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-6w2cb\" (UID: \"c664e24f-ed64-4970-bce5-a5dcffcb6497\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-6w2cb" Jan 27 13:22:40 crc kubenswrapper[4786]: I0127 13:22:40.766218 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c664e24f-ed64-4970-bce5-a5dcffcb6497-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-6w2cb\" (UID: \"c664e24f-ed64-4970-bce5-a5dcffcb6497\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-6w2cb" Jan 27 13:22:40 crc kubenswrapper[4786]: I0127 13:22:40.788886 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpjct\" (UniqueName: \"kubernetes.io/projected/c664e24f-ed64-4970-bce5-a5dcffcb6497-kube-api-access-kpjct\") pod \"cert-manager-operator-controller-manager-64cf6dff88-6w2cb\" (UID: \"c664e24f-ed64-4970-bce5-a5dcffcb6497\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-6w2cb" Jan 27 13:22:40 crc kubenswrapper[4786]: I0127 13:22:40.901787 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-6w2cb" Jan 27 13:22:41 crc kubenswrapper[4786]: I0127 13:22:41.223470 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-6w2cb"] Jan 27 13:22:42 crc kubenswrapper[4786]: I0127 13:22:42.108439 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-6w2cb" event={"ID":"c664e24f-ed64-4970-bce5-a5dcffcb6497","Type":"ContainerStarted","Data":"91aaebb96ca6c9ebd86815cff7f60ffaf00e6b97c06e2c4e49b3b284b01862f3"} Jan 27 13:22:42 crc kubenswrapper[4786]: I0127 13:22:42.896404 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zxdgz"] Jan 27 13:22:42 crc kubenswrapper[4786]: I0127 13:22:42.896935 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zxdgz" podUID="708f9703-578e-44d0-accc-34807bb16f4a" containerName="registry-server" containerID="cri-o://8d703f3016f0fceac7de9ac73d5d8ec162fb75be5cf13444a48be777d8216f01" gracePeriod=2 Jan 27 13:22:43 crc kubenswrapper[4786]: I0127 13:22:43.746487 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zxdgz" Jan 27 13:22:43 crc kubenswrapper[4786]: I0127 13:22:43.912991 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/708f9703-578e-44d0-accc-34807bb16f4a-catalog-content\") pod \"708f9703-578e-44d0-accc-34807bb16f4a\" (UID: \"708f9703-578e-44d0-accc-34807bb16f4a\") " Jan 27 13:22:43 crc kubenswrapper[4786]: I0127 13:22:43.913066 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/708f9703-578e-44d0-accc-34807bb16f4a-utilities\") pod \"708f9703-578e-44d0-accc-34807bb16f4a\" (UID: \"708f9703-578e-44d0-accc-34807bb16f4a\") " Jan 27 13:22:43 crc kubenswrapper[4786]: I0127 13:22:43.913131 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clvq4\" (UniqueName: \"kubernetes.io/projected/708f9703-578e-44d0-accc-34807bb16f4a-kube-api-access-clvq4\") pod \"708f9703-578e-44d0-accc-34807bb16f4a\" (UID: \"708f9703-578e-44d0-accc-34807bb16f4a\") " Jan 27 13:22:43 crc kubenswrapper[4786]: I0127 13:22:43.914360 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/708f9703-578e-44d0-accc-34807bb16f4a-utilities" (OuterVolumeSpecName: "utilities") pod "708f9703-578e-44d0-accc-34807bb16f4a" (UID: "708f9703-578e-44d0-accc-34807bb16f4a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:22:43 crc kubenswrapper[4786]: I0127 13:22:43.922882 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/708f9703-578e-44d0-accc-34807bb16f4a-kube-api-access-clvq4" (OuterVolumeSpecName: "kube-api-access-clvq4") pod "708f9703-578e-44d0-accc-34807bb16f4a" (UID: "708f9703-578e-44d0-accc-34807bb16f4a"). InnerVolumeSpecName "kube-api-access-clvq4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:22:43 crc kubenswrapper[4786]: I0127 13:22:43.957957 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/708f9703-578e-44d0-accc-34807bb16f4a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "708f9703-578e-44d0-accc-34807bb16f4a" (UID: "708f9703-578e-44d0-accc-34807bb16f4a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:22:44 crc kubenswrapper[4786]: I0127 13:22:44.014303 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/708f9703-578e-44d0-accc-34807bb16f4a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 13:22:44 crc kubenswrapper[4786]: I0127 13:22:44.014339 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/708f9703-578e-44d0-accc-34807bb16f4a-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 13:22:44 crc kubenswrapper[4786]: I0127 13:22:44.014349 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clvq4\" (UniqueName: \"kubernetes.io/projected/708f9703-578e-44d0-accc-34807bb16f4a-kube-api-access-clvq4\") on node \"crc\" DevicePath \"\"" Jan 27 13:22:44 crc kubenswrapper[4786]: I0127 13:22:44.121406 4786 generic.go:334] "Generic (PLEG): container finished" podID="708f9703-578e-44d0-accc-34807bb16f4a" containerID="8d703f3016f0fceac7de9ac73d5d8ec162fb75be5cf13444a48be777d8216f01" exitCode=0 Jan 27 13:22:44 crc kubenswrapper[4786]: I0127 13:22:44.121452 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxdgz" event={"ID":"708f9703-578e-44d0-accc-34807bb16f4a","Type":"ContainerDied","Data":"8d703f3016f0fceac7de9ac73d5d8ec162fb75be5cf13444a48be777d8216f01"} Jan 27 13:22:44 crc kubenswrapper[4786]: I0127 13:22:44.121481 4786 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-zxdgz" Jan 27 13:22:44 crc kubenswrapper[4786]: I0127 13:22:44.121499 4786 scope.go:117] "RemoveContainer" containerID="8d703f3016f0fceac7de9ac73d5d8ec162fb75be5cf13444a48be777d8216f01" Jan 27 13:22:44 crc kubenswrapper[4786]: I0127 13:22:44.121488 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxdgz" event={"ID":"708f9703-578e-44d0-accc-34807bb16f4a","Type":"ContainerDied","Data":"702463baa9e78abfb66c00a67fca58d3d144d88737ac9b04b93533380fa93931"} Jan 27 13:22:44 crc kubenswrapper[4786]: I0127 13:22:44.136198 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q8jp9" Jan 27 13:22:44 crc kubenswrapper[4786]: I0127 13:22:44.136432 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q8jp9" Jan 27 13:22:44 crc kubenswrapper[4786]: I0127 13:22:44.152257 4786 scope.go:117] "RemoveContainer" containerID="42fcc2129e4358a529ca66edb9cef1f8cc3d3de1e77a862e19458b5a36d8cc64" Jan 27 13:22:44 crc kubenswrapper[4786]: I0127 13:22:44.155689 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zxdgz"] Jan 27 13:22:44 crc kubenswrapper[4786]: I0127 13:22:44.159696 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zxdgz"] Jan 27 13:22:44 crc kubenswrapper[4786]: I0127 13:22:44.186142 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q8jp9" Jan 27 13:22:44 crc kubenswrapper[4786]: I0127 13:22:44.189969 4786 scope.go:117] "RemoveContainer" containerID="e7ab5f95f02551079ef6ae5331d71524c7cf4308425e2db8ebd243469bb068c1" Jan 27 13:22:44 crc kubenswrapper[4786]: I0127 13:22:44.210627 4786 scope.go:117] "RemoveContainer" 
containerID="8d703f3016f0fceac7de9ac73d5d8ec162fb75be5cf13444a48be777d8216f01" Jan 27 13:22:44 crc kubenswrapper[4786]: E0127 13:22:44.211022 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d703f3016f0fceac7de9ac73d5d8ec162fb75be5cf13444a48be777d8216f01\": container with ID starting with 8d703f3016f0fceac7de9ac73d5d8ec162fb75be5cf13444a48be777d8216f01 not found: ID does not exist" containerID="8d703f3016f0fceac7de9ac73d5d8ec162fb75be5cf13444a48be777d8216f01" Jan 27 13:22:44 crc kubenswrapper[4786]: I0127 13:22:44.211121 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d703f3016f0fceac7de9ac73d5d8ec162fb75be5cf13444a48be777d8216f01"} err="failed to get container status \"8d703f3016f0fceac7de9ac73d5d8ec162fb75be5cf13444a48be777d8216f01\": rpc error: code = NotFound desc = could not find container \"8d703f3016f0fceac7de9ac73d5d8ec162fb75be5cf13444a48be777d8216f01\": container with ID starting with 8d703f3016f0fceac7de9ac73d5d8ec162fb75be5cf13444a48be777d8216f01 not found: ID does not exist" Jan 27 13:22:44 crc kubenswrapper[4786]: I0127 13:22:44.211154 4786 scope.go:117] "RemoveContainer" containerID="42fcc2129e4358a529ca66edb9cef1f8cc3d3de1e77a862e19458b5a36d8cc64" Jan 27 13:22:44 crc kubenswrapper[4786]: E0127 13:22:44.211453 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42fcc2129e4358a529ca66edb9cef1f8cc3d3de1e77a862e19458b5a36d8cc64\": container with ID starting with 42fcc2129e4358a529ca66edb9cef1f8cc3d3de1e77a862e19458b5a36d8cc64 not found: ID does not exist" containerID="42fcc2129e4358a529ca66edb9cef1f8cc3d3de1e77a862e19458b5a36d8cc64" Jan 27 13:22:44 crc kubenswrapper[4786]: I0127 13:22:44.211478 4786 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"42fcc2129e4358a529ca66edb9cef1f8cc3d3de1e77a862e19458b5a36d8cc64"} err="failed to get container status \"42fcc2129e4358a529ca66edb9cef1f8cc3d3de1e77a862e19458b5a36d8cc64\": rpc error: code = NotFound desc = could not find container \"42fcc2129e4358a529ca66edb9cef1f8cc3d3de1e77a862e19458b5a36d8cc64\": container with ID starting with 42fcc2129e4358a529ca66edb9cef1f8cc3d3de1e77a862e19458b5a36d8cc64 not found: ID does not exist" Jan 27 13:22:44 crc kubenswrapper[4786]: I0127 13:22:44.211495 4786 scope.go:117] "RemoveContainer" containerID="e7ab5f95f02551079ef6ae5331d71524c7cf4308425e2db8ebd243469bb068c1" Jan 27 13:22:44 crc kubenswrapper[4786]: E0127 13:22:44.211966 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7ab5f95f02551079ef6ae5331d71524c7cf4308425e2db8ebd243469bb068c1\": container with ID starting with e7ab5f95f02551079ef6ae5331d71524c7cf4308425e2db8ebd243469bb068c1 not found: ID does not exist" containerID="e7ab5f95f02551079ef6ae5331d71524c7cf4308425e2db8ebd243469bb068c1" Jan 27 13:22:44 crc kubenswrapper[4786]: I0127 13:22:44.211998 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7ab5f95f02551079ef6ae5331d71524c7cf4308425e2db8ebd243469bb068c1"} err="failed to get container status \"e7ab5f95f02551079ef6ae5331d71524c7cf4308425e2db8ebd243469bb068c1\": rpc error: code = NotFound desc = could not find container \"e7ab5f95f02551079ef6ae5331d71524c7cf4308425e2db8ebd243469bb068c1\": container with ID starting with e7ab5f95f02551079ef6ae5331d71524c7cf4308425e2db8ebd243469bb068c1 not found: ID does not exist" Jan 27 13:22:45 crc kubenswrapper[4786]: I0127 13:22:45.172870 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q8jp9" Jan 27 13:22:45 crc kubenswrapper[4786]: I0127 13:22:45.474109 4786 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="708f9703-578e-44d0-accc-34807bb16f4a" path="/var/lib/kubelet/pods/708f9703-578e-44d0-accc-34807bb16f4a/volumes" Jan 27 13:22:46 crc kubenswrapper[4786]: I0127 13:22:46.805375 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q8jp9"] Jan 27 13:22:48 crc kubenswrapper[4786]: I0127 13:22:48.147781 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q8jp9" podUID="32f7bf77-376b-4889-a2a1-c7d0b8b5c93b" containerName="registry-server" containerID="cri-o://d2bb8ec41f12a1ce7fd708501c5367ebed1278e0a1fa289bbe3827de0b10208f" gracePeriod=2 Jan 27 13:22:49 crc kubenswrapper[4786]: I0127 13:22:49.155257 4786 generic.go:334] "Generic (PLEG): container finished" podID="32f7bf77-376b-4889-a2a1-c7d0b8b5c93b" containerID="d2bb8ec41f12a1ce7fd708501c5367ebed1278e0a1fa289bbe3827de0b10208f" exitCode=0 Jan 27 13:22:49 crc kubenswrapper[4786]: I0127 13:22:49.155321 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8jp9" event={"ID":"32f7bf77-376b-4889-a2a1-c7d0b8b5c93b","Type":"ContainerDied","Data":"d2bb8ec41f12a1ce7fd708501c5367ebed1278e0a1fa289bbe3827de0b10208f"} Jan 27 13:22:49 crc kubenswrapper[4786]: I0127 13:22:49.715026 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q8jp9" Jan 27 13:22:49 crc kubenswrapper[4786]: I0127 13:22:49.793202 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32f7bf77-376b-4889-a2a1-c7d0b8b5c93b-utilities\") pod \"32f7bf77-376b-4889-a2a1-c7d0b8b5c93b\" (UID: \"32f7bf77-376b-4889-a2a1-c7d0b8b5c93b\") " Jan 27 13:22:49 crc kubenswrapper[4786]: I0127 13:22:49.793260 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32f7bf77-376b-4889-a2a1-c7d0b8b5c93b-catalog-content\") pod \"32f7bf77-376b-4889-a2a1-c7d0b8b5c93b\" (UID: \"32f7bf77-376b-4889-a2a1-c7d0b8b5c93b\") " Jan 27 13:22:49 crc kubenswrapper[4786]: I0127 13:22:49.793319 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwgbr\" (UniqueName: \"kubernetes.io/projected/32f7bf77-376b-4889-a2a1-c7d0b8b5c93b-kube-api-access-nwgbr\") pod \"32f7bf77-376b-4889-a2a1-c7d0b8b5c93b\" (UID: \"32f7bf77-376b-4889-a2a1-c7d0b8b5c93b\") " Jan 27 13:22:49 crc kubenswrapper[4786]: I0127 13:22:49.794134 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32f7bf77-376b-4889-a2a1-c7d0b8b5c93b-utilities" (OuterVolumeSpecName: "utilities") pod "32f7bf77-376b-4889-a2a1-c7d0b8b5c93b" (UID: "32f7bf77-376b-4889-a2a1-c7d0b8b5c93b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:22:49 crc kubenswrapper[4786]: I0127 13:22:49.798291 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32f7bf77-376b-4889-a2a1-c7d0b8b5c93b-kube-api-access-nwgbr" (OuterVolumeSpecName: "kube-api-access-nwgbr") pod "32f7bf77-376b-4889-a2a1-c7d0b8b5c93b" (UID: "32f7bf77-376b-4889-a2a1-c7d0b8b5c93b"). InnerVolumeSpecName "kube-api-access-nwgbr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:22:49 crc kubenswrapper[4786]: I0127 13:22:49.818828 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32f7bf77-376b-4889-a2a1-c7d0b8b5c93b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "32f7bf77-376b-4889-a2a1-c7d0b8b5c93b" (UID: "32f7bf77-376b-4889-a2a1-c7d0b8b5c93b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:22:49 crc kubenswrapper[4786]: I0127 13:22:49.894705 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32f7bf77-376b-4889-a2a1-c7d0b8b5c93b-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 13:22:49 crc kubenswrapper[4786]: I0127 13:22:49.894734 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32f7bf77-376b-4889-a2a1-c7d0b8b5c93b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 13:22:49 crc kubenswrapper[4786]: I0127 13:22:49.894743 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwgbr\" (UniqueName: \"kubernetes.io/projected/32f7bf77-376b-4889-a2a1-c7d0b8b5c93b-kube-api-access-nwgbr\") on node \"crc\" DevicePath \"\"" Jan 27 13:22:50 crc kubenswrapper[4786]: I0127 13:22:50.163230 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8jp9" event={"ID":"32f7bf77-376b-4889-a2a1-c7d0b8b5c93b","Type":"ContainerDied","Data":"f37b7ce73e4ef4f4231c8122b809393022cf30afb7079cec4799409da16e2868"} Jan 27 13:22:50 crc kubenswrapper[4786]: I0127 13:22:50.163276 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q8jp9" Jan 27 13:22:50 crc kubenswrapper[4786]: I0127 13:22:50.163287 4786 scope.go:117] "RemoveContainer" containerID="d2bb8ec41f12a1ce7fd708501c5367ebed1278e0a1fa289bbe3827de0b10208f" Jan 27 13:22:50 crc kubenswrapper[4786]: I0127 13:22:50.178305 4786 scope.go:117] "RemoveContainer" containerID="321f255bf3183d261d2af7d3240668e228ec824bbde2432e60b9dabf912e5174" Jan 27 13:22:50 crc kubenswrapper[4786]: I0127 13:22:50.189585 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q8jp9"] Jan 27 13:22:50 crc kubenswrapper[4786]: I0127 13:22:50.198408 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q8jp9"] Jan 27 13:22:50 crc kubenswrapper[4786]: I0127 13:22:50.198674 4786 scope.go:117] "RemoveContainer" containerID="ab9d7c52ee458bee1b411205f2437dd7efe8b230c14c4437cb100a420367f646" Jan 27 13:22:51 crc kubenswrapper[4786]: I0127 13:22:51.472431 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32f7bf77-376b-4889-a2a1-c7d0b8b5c93b" path="/var/lib/kubelet/pods/32f7bf77-376b-4889-a2a1-c7d0b8b5c93b/volumes" Jan 27 13:22:52 crc kubenswrapper[4786]: I0127 13:22:52.175992 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-6w2cb" event={"ID":"c664e24f-ed64-4970-bce5-a5dcffcb6497","Type":"ContainerStarted","Data":"4e8a0c7c98ddf6689d11b6e00182526aecb56f30eddce4fffcc7b4b82dd007a0"} Jan 27 13:22:52 crc kubenswrapper[4786]: I0127 13:22:52.195836 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-6w2cb" podStartSLOduration=1.75900691 podStartE2EDuration="12.195818517s" podCreationTimestamp="2026-01-27 13:22:40 +0000 UTC" firstStartedPulling="2026-01-27 13:22:41.227625131 +0000 UTC m=+944.438239250" 
lastFinishedPulling="2026-01-27 13:22:51.664436738 +0000 UTC m=+954.875050857" observedRunningTime="2026-01-27 13:22:52.191788317 +0000 UTC m=+955.402402446" watchObservedRunningTime="2026-01-27 13:22:52.195818517 +0000 UTC m=+955.406432656" Jan 27 13:22:54 crc kubenswrapper[4786]: I0127 13:22:54.445853 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-ddztq"] Jan 27 13:22:54 crc kubenswrapper[4786]: E0127 13:22:54.446689 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="708f9703-578e-44d0-accc-34807bb16f4a" containerName="registry-server" Jan 27 13:22:54 crc kubenswrapper[4786]: I0127 13:22:54.446715 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="708f9703-578e-44d0-accc-34807bb16f4a" containerName="registry-server" Jan 27 13:22:54 crc kubenswrapper[4786]: E0127 13:22:54.446727 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="708f9703-578e-44d0-accc-34807bb16f4a" containerName="extract-utilities" Jan 27 13:22:54 crc kubenswrapper[4786]: I0127 13:22:54.446735 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="708f9703-578e-44d0-accc-34807bb16f4a" containerName="extract-utilities" Jan 27 13:22:54 crc kubenswrapper[4786]: E0127 13:22:54.446757 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="708f9703-578e-44d0-accc-34807bb16f4a" containerName="extract-content" Jan 27 13:22:54 crc kubenswrapper[4786]: I0127 13:22:54.446767 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="708f9703-578e-44d0-accc-34807bb16f4a" containerName="extract-content" Jan 27 13:22:54 crc kubenswrapper[4786]: E0127 13:22:54.446775 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32f7bf77-376b-4889-a2a1-c7d0b8b5c93b" containerName="extract-utilities" Jan 27 13:22:54 crc kubenswrapper[4786]: I0127 13:22:54.446782 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="32f7bf77-376b-4889-a2a1-c7d0b8b5c93b" 
containerName="extract-utilities" Jan 27 13:22:54 crc kubenswrapper[4786]: E0127 13:22:54.446807 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32f7bf77-376b-4889-a2a1-c7d0b8b5c93b" containerName="extract-content" Jan 27 13:22:54 crc kubenswrapper[4786]: I0127 13:22:54.446814 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="32f7bf77-376b-4889-a2a1-c7d0b8b5c93b" containerName="extract-content" Jan 27 13:22:54 crc kubenswrapper[4786]: E0127 13:22:54.446847 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32f7bf77-376b-4889-a2a1-c7d0b8b5c93b" containerName="registry-server" Jan 27 13:22:54 crc kubenswrapper[4786]: I0127 13:22:54.446853 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="32f7bf77-376b-4889-a2a1-c7d0b8b5c93b" containerName="registry-server" Jan 27 13:22:54 crc kubenswrapper[4786]: I0127 13:22:54.447090 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="32f7bf77-376b-4889-a2a1-c7d0b8b5c93b" containerName="registry-server" Jan 27 13:22:54 crc kubenswrapper[4786]: I0127 13:22:54.447103 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="708f9703-578e-44d0-accc-34807bb16f4a" containerName="registry-server" Jan 27 13:22:54 crc kubenswrapper[4786]: I0127 13:22:54.448048 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-ddztq" Jan 27 13:22:54 crc kubenswrapper[4786]: I0127 13:22:54.453555 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 27 13:22:54 crc kubenswrapper[4786]: I0127 13:22:54.454113 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 27 13:22:54 crc kubenswrapper[4786]: I0127 13:22:54.468627 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-ddztq"] Jan 27 13:22:54 crc kubenswrapper[4786]: I0127 13:22:54.552804 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e346953b-9953-4381-8ec7-72958174f6d3-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-ddztq\" (UID: \"e346953b-9953-4381-8ec7-72958174f6d3\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-ddztq" Jan 27 13:22:54 crc kubenswrapper[4786]: I0127 13:22:54.552909 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhmgr\" (UniqueName: \"kubernetes.io/projected/e346953b-9953-4381-8ec7-72958174f6d3-kube-api-access-fhmgr\") pod \"cert-manager-webhook-f4fb5df64-ddztq\" (UID: \"e346953b-9953-4381-8ec7-72958174f6d3\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-ddztq" Jan 27 13:22:54 crc kubenswrapper[4786]: I0127 13:22:54.654020 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e346953b-9953-4381-8ec7-72958174f6d3-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-ddztq\" (UID: \"e346953b-9953-4381-8ec7-72958174f6d3\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-ddztq" Jan 27 13:22:54 crc kubenswrapper[4786]: I0127 13:22:54.654329 4786 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-fhmgr\" (UniqueName: \"kubernetes.io/projected/e346953b-9953-4381-8ec7-72958174f6d3-kube-api-access-fhmgr\") pod \"cert-manager-webhook-f4fb5df64-ddztq\" (UID: \"e346953b-9953-4381-8ec7-72958174f6d3\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-ddztq" Jan 27 13:22:54 crc kubenswrapper[4786]: I0127 13:22:54.684351 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhmgr\" (UniqueName: \"kubernetes.io/projected/e346953b-9953-4381-8ec7-72958174f6d3-kube-api-access-fhmgr\") pod \"cert-manager-webhook-f4fb5df64-ddztq\" (UID: \"e346953b-9953-4381-8ec7-72958174f6d3\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-ddztq" Jan 27 13:22:54 crc kubenswrapper[4786]: I0127 13:22:54.685181 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e346953b-9953-4381-8ec7-72958174f6d3-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-ddztq\" (UID: \"e346953b-9953-4381-8ec7-72958174f6d3\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-ddztq" Jan 27 13:22:54 crc kubenswrapper[4786]: I0127 13:22:54.772415 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-ddztq" Jan 27 13:22:55 crc kubenswrapper[4786]: I0127 13:22:55.215068 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-ddztq"] Jan 27 13:22:55 crc kubenswrapper[4786]: W0127 13:22:55.226164 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode346953b_9953_4381_8ec7_72958174f6d3.slice/crio-57c94dbb5e0c400afe7e9d7673db82738c832fbf77ada9fedfc64dedfb4b8b71 WatchSource:0}: Error finding container 57c94dbb5e0c400afe7e9d7673db82738c832fbf77ada9fedfc64dedfb4b8b71: Status 404 returned error can't find the container with id 57c94dbb5e0c400afe7e9d7673db82738c832fbf77ada9fedfc64dedfb4b8b71 Jan 27 13:22:56 crc kubenswrapper[4786]: I0127 13:22:56.201710 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-ddztq" event={"ID":"e346953b-9953-4381-8ec7-72958174f6d3","Type":"ContainerStarted","Data":"57c94dbb5e0c400afe7e9d7673db82738c832fbf77ada9fedfc64dedfb4b8b71"} Jan 27 13:22:58 crc kubenswrapper[4786]: I0127 13:22:58.181162 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-x6g5c"] Jan 27 13:22:58 crc kubenswrapper[4786]: I0127 13:22:58.182913 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-x6g5c" Jan 27 13:22:58 crc kubenswrapper[4786]: I0127 13:22:58.185041 4786 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-vggwm" Jan 27 13:22:58 crc kubenswrapper[4786]: I0127 13:22:58.188619 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-x6g5c"] Jan 27 13:22:58 crc kubenswrapper[4786]: I0127 13:22:58.323997 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc6lt\" (UniqueName: \"kubernetes.io/projected/1b1e34b6-0b50-4461-821c-64f3cafd6d69-kube-api-access-bc6lt\") pod \"cert-manager-cainjector-855d9ccff4-x6g5c\" (UID: \"1b1e34b6-0b50-4461-821c-64f3cafd6d69\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-x6g5c" Jan 27 13:22:58 crc kubenswrapper[4786]: I0127 13:22:58.324049 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1b1e34b6-0b50-4461-821c-64f3cafd6d69-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-x6g5c\" (UID: \"1b1e34b6-0b50-4461-821c-64f3cafd6d69\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-x6g5c" Jan 27 13:22:58 crc kubenswrapper[4786]: I0127 13:22:58.425120 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc6lt\" (UniqueName: \"kubernetes.io/projected/1b1e34b6-0b50-4461-821c-64f3cafd6d69-kube-api-access-bc6lt\") pod \"cert-manager-cainjector-855d9ccff4-x6g5c\" (UID: \"1b1e34b6-0b50-4461-821c-64f3cafd6d69\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-x6g5c" Jan 27 13:22:58 crc kubenswrapper[4786]: I0127 13:22:58.425178 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/1b1e34b6-0b50-4461-821c-64f3cafd6d69-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-x6g5c\" (UID: \"1b1e34b6-0b50-4461-821c-64f3cafd6d69\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-x6g5c" Jan 27 13:22:58 crc kubenswrapper[4786]: I0127 13:22:58.459395 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1b1e34b6-0b50-4461-821c-64f3cafd6d69-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-x6g5c\" (UID: \"1b1e34b6-0b50-4461-821c-64f3cafd6d69\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-x6g5c" Jan 27 13:22:58 crc kubenswrapper[4786]: I0127 13:22:58.464502 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc6lt\" (UniqueName: \"kubernetes.io/projected/1b1e34b6-0b50-4461-821c-64f3cafd6d69-kube-api-access-bc6lt\") pod \"cert-manager-cainjector-855d9ccff4-x6g5c\" (UID: \"1b1e34b6-0b50-4461-821c-64f3cafd6d69\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-x6g5c" Jan 27 13:22:58 crc kubenswrapper[4786]: I0127 13:22:58.521828 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-x6g5c" Jan 27 13:22:58 crc kubenswrapper[4786]: I0127 13:22:58.975574 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-x6g5c"] Jan 27 13:23:04 crc kubenswrapper[4786]: W0127 13:23:04.440881 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b1e34b6_0b50_4461_821c_64f3cafd6d69.slice/crio-c1bb9958c7708a389d9814253985da419b49ecbb9585dd1456237b54f52eb903 WatchSource:0}: Error finding container c1bb9958c7708a389d9814253985da419b49ecbb9585dd1456237b54f52eb903: Status 404 returned error can't find the container with id c1bb9958c7708a389d9814253985da419b49ecbb9585dd1456237b54f52eb903 Jan 27 13:23:05 crc kubenswrapper[4786]: I0127 13:23:05.271887 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-x6g5c" event={"ID":"1b1e34b6-0b50-4461-821c-64f3cafd6d69","Type":"ContainerStarted","Data":"c1bb9958c7708a389d9814253985da419b49ecbb9585dd1456237b54f52eb903"} Jan 27 13:23:05 crc kubenswrapper[4786]: I0127 13:23:05.332182 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-nswrf"] Jan 27 13:23:05 crc kubenswrapper[4786]: I0127 13:23:05.333020 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-nswrf" Jan 27 13:23:05 crc kubenswrapper[4786]: I0127 13:23:05.335151 4786 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-l9lkq" Jan 27 13:23:05 crc kubenswrapper[4786]: I0127 13:23:05.342374 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-nswrf"] Jan 27 13:23:05 crc kubenswrapper[4786]: I0127 13:23:05.449683 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e1729a70-d007-4211-8c66-58d58ada9764-bound-sa-token\") pod \"cert-manager-86cb77c54b-nswrf\" (UID: \"e1729a70-d007-4211-8c66-58d58ada9764\") " pod="cert-manager/cert-manager-86cb77c54b-nswrf" Jan 27 13:23:05 crc kubenswrapper[4786]: I0127 13:23:05.449872 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdrhs\" (UniqueName: \"kubernetes.io/projected/e1729a70-d007-4211-8c66-58d58ada9764-kube-api-access-mdrhs\") pod \"cert-manager-86cb77c54b-nswrf\" (UID: \"e1729a70-d007-4211-8c66-58d58ada9764\") " pod="cert-manager/cert-manager-86cb77c54b-nswrf" Jan 27 13:23:05 crc kubenswrapper[4786]: I0127 13:23:05.551328 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdrhs\" (UniqueName: \"kubernetes.io/projected/e1729a70-d007-4211-8c66-58d58ada9764-kube-api-access-mdrhs\") pod \"cert-manager-86cb77c54b-nswrf\" (UID: \"e1729a70-d007-4211-8c66-58d58ada9764\") " pod="cert-manager/cert-manager-86cb77c54b-nswrf" Jan 27 13:23:05 crc kubenswrapper[4786]: I0127 13:23:05.551426 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e1729a70-d007-4211-8c66-58d58ada9764-bound-sa-token\") pod \"cert-manager-86cb77c54b-nswrf\" (UID: 
\"e1729a70-d007-4211-8c66-58d58ada9764\") " pod="cert-manager/cert-manager-86cb77c54b-nswrf" Jan 27 13:23:05 crc kubenswrapper[4786]: I0127 13:23:05.575744 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e1729a70-d007-4211-8c66-58d58ada9764-bound-sa-token\") pod \"cert-manager-86cb77c54b-nswrf\" (UID: \"e1729a70-d007-4211-8c66-58d58ada9764\") " pod="cert-manager/cert-manager-86cb77c54b-nswrf" Jan 27 13:23:05 crc kubenswrapper[4786]: I0127 13:23:05.581355 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdrhs\" (UniqueName: \"kubernetes.io/projected/e1729a70-d007-4211-8c66-58d58ada9764-kube-api-access-mdrhs\") pod \"cert-manager-86cb77c54b-nswrf\" (UID: \"e1729a70-d007-4211-8c66-58d58ada9764\") " pod="cert-manager/cert-manager-86cb77c54b-nswrf" Jan 27 13:23:05 crc kubenswrapper[4786]: I0127 13:23:05.649550 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-nswrf" Jan 27 13:23:06 crc kubenswrapper[4786]: I0127 13:23:06.091023 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-nswrf"] Jan 27 13:23:06 crc kubenswrapper[4786]: W0127 13:23:06.098001 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1729a70_d007_4211_8c66_58d58ada9764.slice/crio-319ee77978b9f496f8c2b76d2cfac65ee2ae079db787b25840a9d6f9c2858c6d WatchSource:0}: Error finding container 319ee77978b9f496f8c2b76d2cfac65ee2ae079db787b25840a9d6f9c2858c6d: Status 404 returned error can't find the container with id 319ee77978b9f496f8c2b76d2cfac65ee2ae079db787b25840a9d6f9c2858c6d Jan 27 13:23:06 crc kubenswrapper[4786]: I0127 13:23:06.280246 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-ddztq" 
event={"ID":"e346953b-9953-4381-8ec7-72958174f6d3","Type":"ContainerStarted","Data":"4e2f8b8b63f71224daf90815ed41794f5d34bc33ad2bb8d27b2f594327ceb2f9"} Jan 27 13:23:06 crc kubenswrapper[4786]: I0127 13:23:06.280331 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-ddztq" Jan 27 13:23:06 crc kubenswrapper[4786]: I0127 13:23:06.281474 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-nswrf" event={"ID":"e1729a70-d007-4211-8c66-58d58ada9764","Type":"ContainerStarted","Data":"319ee77978b9f496f8c2b76d2cfac65ee2ae079db787b25840a9d6f9c2858c6d"} Jan 27 13:23:06 crc kubenswrapper[4786]: I0127 13:23:06.298733 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-ddztq" podStartSLOduration=1.8071549930000002 podStartE2EDuration="12.298708906s" podCreationTimestamp="2026-01-27 13:22:54 +0000 UTC" firstStartedPulling="2026-01-27 13:22:55.2287228 +0000 UTC m=+958.439336919" lastFinishedPulling="2026-01-27 13:23:05.720276713 +0000 UTC m=+968.930890832" observedRunningTime="2026-01-27 13:23:06.295872389 +0000 UTC m=+969.506486508" watchObservedRunningTime="2026-01-27 13:23:06.298708906 +0000 UTC m=+969.509323025" Jan 27 13:23:07 crc kubenswrapper[4786]: I0127 13:23:07.293650 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-x6g5c" event={"ID":"1b1e34b6-0b50-4461-821c-64f3cafd6d69","Type":"ContainerStarted","Data":"ad33e59d02a93438ab39ea77452c47bb48f0e6a8df46de8f5191d373f259e8fb"} Jan 27 13:23:07 crc kubenswrapper[4786]: I0127 13:23:07.296875 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-nswrf" event={"ID":"e1729a70-d007-4211-8c66-58d58ada9764","Type":"ContainerStarted","Data":"9072640ddbd618c1a5a19b59c4f8a3970260488bb5bd59db9ecd2cec01724f44"} Jan 27 13:23:07 crc kubenswrapper[4786]: I0127 
13:23:07.309512 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-x6g5c" podStartSLOduration=7.461277016 podStartE2EDuration="9.309496312s" podCreationTimestamp="2026-01-27 13:22:58 +0000 UTC" firstStartedPulling="2026-01-27 13:23:04.443536481 +0000 UTC m=+967.654150600" lastFinishedPulling="2026-01-27 13:23:06.291755777 +0000 UTC m=+969.502369896" observedRunningTime="2026-01-27 13:23:07.307016864 +0000 UTC m=+970.517630983" watchObservedRunningTime="2026-01-27 13:23:07.309496312 +0000 UTC m=+970.520110431" Jan 27 13:23:07 crc kubenswrapper[4786]: I0127 13:23:07.334946 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-nswrf" podStartSLOduration=2.334928697 podStartE2EDuration="2.334928697s" podCreationTimestamp="2026-01-27 13:23:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:23:07.334078684 +0000 UTC m=+970.544692823" watchObservedRunningTime="2026-01-27 13:23:07.334928697 +0000 UTC m=+970.545542816" Jan 27 13:23:09 crc kubenswrapper[4786]: I0127 13:23:09.532433 4786 patch_prober.go:28] interesting pod/machine-config-daemon-7bxtk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:23:09 crc kubenswrapper[4786]: I0127 13:23:09.532787 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:23:14 crc kubenswrapper[4786]: I0127 13:23:14.776091 4786 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-ddztq" Jan 27 13:23:17 crc kubenswrapper[4786]: I0127 13:23:17.872977 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-n7l9z"] Jan 27 13:23:17 crc kubenswrapper[4786]: I0127 13:23:17.873739 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-n7l9z" Jan 27 13:23:17 crc kubenswrapper[4786]: I0127 13:23:17.876008 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 27 13:23:17 crc kubenswrapper[4786]: I0127 13:23:17.876032 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-wg2cd" Jan 27 13:23:17 crc kubenswrapper[4786]: I0127 13:23:17.876901 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 27 13:23:17 crc kubenswrapper[4786]: I0127 13:23:17.893690 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-n7l9z"] Jan 27 13:23:18 crc kubenswrapper[4786]: I0127 13:23:18.004951 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26ckx\" (UniqueName: \"kubernetes.io/projected/4875219b-aaa4-45f8-a5a8-7930dbd8894b-kube-api-access-26ckx\") pod \"openstack-operator-index-n7l9z\" (UID: \"4875219b-aaa4-45f8-a5a8-7930dbd8894b\") " pod="openstack-operators/openstack-operator-index-n7l9z" Jan 27 13:23:18 crc kubenswrapper[4786]: I0127 13:23:18.105856 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26ckx\" (UniqueName: \"kubernetes.io/projected/4875219b-aaa4-45f8-a5a8-7930dbd8894b-kube-api-access-26ckx\") pod \"openstack-operator-index-n7l9z\" (UID: \"4875219b-aaa4-45f8-a5a8-7930dbd8894b\") " 
pod="openstack-operators/openstack-operator-index-n7l9z" Jan 27 13:23:18 crc kubenswrapper[4786]: I0127 13:23:18.125853 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26ckx\" (UniqueName: \"kubernetes.io/projected/4875219b-aaa4-45f8-a5a8-7930dbd8894b-kube-api-access-26ckx\") pod \"openstack-operator-index-n7l9z\" (UID: \"4875219b-aaa4-45f8-a5a8-7930dbd8894b\") " pod="openstack-operators/openstack-operator-index-n7l9z" Jan 27 13:23:18 crc kubenswrapper[4786]: I0127 13:23:18.199084 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-n7l9z" Jan 27 13:23:21 crc kubenswrapper[4786]: I0127 13:23:21.253739 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-n7l9z"] Jan 27 13:23:21 crc kubenswrapper[4786]: I0127 13:23:21.767254 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-n7l9z"] Jan 27 13:23:21 crc kubenswrapper[4786]: W0127 13:23:21.773017 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4875219b_aaa4_45f8_a5a8_7930dbd8894b.slice/crio-4ba5a2082f1201abc4bdc326ad6c043cf33a2d9ece047008695f61ea56334481 WatchSource:0}: Error finding container 4ba5a2082f1201abc4bdc326ad6c043cf33a2d9ece047008695f61ea56334481: Status 404 returned error can't find the container with id 4ba5a2082f1201abc4bdc326ad6c043cf33a2d9ece047008695f61ea56334481 Jan 27 13:23:21 crc kubenswrapper[4786]: I0127 13:23:21.860049 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-7mc6w"] Jan 27 13:23:21 crc kubenswrapper[4786]: I0127 13:23:21.862214 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-7mc6w" Jan 27 13:23:21 crc kubenswrapper[4786]: I0127 13:23:21.869596 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7mc6w"] Jan 27 13:23:21 crc kubenswrapper[4786]: I0127 13:23:21.957229 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5vr9\" (UniqueName: \"kubernetes.io/projected/e525b58f-7304-40e1-9fdc-949f43bb2cba-kube-api-access-j5vr9\") pod \"openstack-operator-index-7mc6w\" (UID: \"e525b58f-7304-40e1-9fdc-949f43bb2cba\") " pod="openstack-operators/openstack-operator-index-7mc6w" Jan 27 13:23:22 crc kubenswrapper[4786]: I0127 13:23:22.058865 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5vr9\" (UniqueName: \"kubernetes.io/projected/e525b58f-7304-40e1-9fdc-949f43bb2cba-kube-api-access-j5vr9\") pod \"openstack-operator-index-7mc6w\" (UID: \"e525b58f-7304-40e1-9fdc-949f43bb2cba\") " pod="openstack-operators/openstack-operator-index-7mc6w" Jan 27 13:23:22 crc kubenswrapper[4786]: I0127 13:23:22.077019 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5vr9\" (UniqueName: \"kubernetes.io/projected/e525b58f-7304-40e1-9fdc-949f43bb2cba-kube-api-access-j5vr9\") pod \"openstack-operator-index-7mc6w\" (UID: \"e525b58f-7304-40e1-9fdc-949f43bb2cba\") " pod="openstack-operators/openstack-operator-index-7mc6w" Jan 27 13:23:22 crc kubenswrapper[4786]: I0127 13:23:22.184413 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-7mc6w" Jan 27 13:23:22 crc kubenswrapper[4786]: I0127 13:23:22.380433 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-n7l9z" event={"ID":"4875219b-aaa4-45f8-a5a8-7930dbd8894b","Type":"ContainerStarted","Data":"4ba5a2082f1201abc4bdc326ad6c043cf33a2d9ece047008695f61ea56334481"} Jan 27 13:23:22 crc kubenswrapper[4786]: I0127 13:23:22.441356 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7mc6w"] Jan 27 13:23:22 crc kubenswrapper[4786]: W0127 13:23:22.447189 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode525b58f_7304_40e1_9fdc_949f43bb2cba.slice/crio-41cdf42f78219eba86a7604b6ef836002d7038c1ff8a5ec2da42d1feeb3fe799 WatchSource:0}: Error finding container 41cdf42f78219eba86a7604b6ef836002d7038c1ff8a5ec2da42d1feeb3fe799: Status 404 returned error can't find the container with id 41cdf42f78219eba86a7604b6ef836002d7038c1ff8a5ec2da42d1feeb3fe799 Jan 27 13:23:23 crc kubenswrapper[4786]: I0127 13:23:23.388367 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7mc6w" event={"ID":"e525b58f-7304-40e1-9fdc-949f43bb2cba","Type":"ContainerStarted","Data":"41cdf42f78219eba86a7604b6ef836002d7038c1ff8a5ec2da42d1feeb3fe799"} Jan 27 13:23:29 crc kubenswrapper[4786]: I0127 13:23:29.427778 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-n7l9z" event={"ID":"4875219b-aaa4-45f8-a5a8-7930dbd8894b","Type":"ContainerStarted","Data":"7c5f4fb442f3dd7f245974451e233e20fbe3b0235ade5971db67c9a87add07f2"} Jan 27 13:23:30 crc kubenswrapper[4786]: I0127 13:23:30.433638 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7mc6w" 
event={"ID":"e525b58f-7304-40e1-9fdc-949f43bb2cba","Type":"ContainerStarted","Data":"d1b472e08c52c88644b1e768b29093fb4ac8a08c83cc6cba8f333eb507dbe928"} Jan 27 13:23:30 crc kubenswrapper[4786]: I0127 13:23:30.433715 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-n7l9z" podUID="4875219b-aaa4-45f8-a5a8-7930dbd8894b" containerName="registry-server" containerID="cri-o://7c5f4fb442f3dd7f245974451e233e20fbe3b0235ade5971db67c9a87add07f2" gracePeriod=2 Jan 27 13:23:30 crc kubenswrapper[4786]: I0127 13:23:30.455335 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-n7l9z" podStartSLOduration=6.043984784 podStartE2EDuration="13.455308949s" podCreationTimestamp="2026-01-27 13:23:17 +0000 UTC" firstStartedPulling="2026-01-27 13:23:21.775234074 +0000 UTC m=+984.985848193" lastFinishedPulling="2026-01-27 13:23:29.186558239 +0000 UTC m=+992.397172358" observedRunningTime="2026-01-27 13:23:30.451979327 +0000 UTC m=+993.662593456" watchObservedRunningTime="2026-01-27 13:23:30.455308949 +0000 UTC m=+993.665923068" Jan 27 13:23:30 crc kubenswrapper[4786]: I0127 13:23:30.469212 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-7mc6w" podStartSLOduration=2.3252117119999998 podStartE2EDuration="9.469197891s" podCreationTimestamp="2026-01-27 13:23:21 +0000 UTC" firstStartedPulling="2026-01-27 13:23:22.454330709 +0000 UTC m=+985.664944828" lastFinishedPulling="2026-01-27 13:23:29.598316888 +0000 UTC m=+992.808931007" observedRunningTime="2026-01-27 13:23:30.468564924 +0000 UTC m=+993.679179043" watchObservedRunningTime="2026-01-27 13:23:30.469197891 +0000 UTC m=+993.679812010" Jan 27 13:23:31 crc kubenswrapper[4786]: I0127 13:23:31.379886 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-n7l9z" Jan 27 13:23:31 crc kubenswrapper[4786]: I0127 13:23:31.440402 4786 generic.go:334] "Generic (PLEG): container finished" podID="4875219b-aaa4-45f8-a5a8-7930dbd8894b" containerID="7c5f4fb442f3dd7f245974451e233e20fbe3b0235ade5971db67c9a87add07f2" exitCode=0 Jan 27 13:23:31 crc kubenswrapper[4786]: I0127 13:23:31.440461 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-n7l9z" Jan 27 13:23:31 crc kubenswrapper[4786]: I0127 13:23:31.440471 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-n7l9z" event={"ID":"4875219b-aaa4-45f8-a5a8-7930dbd8894b","Type":"ContainerDied","Data":"7c5f4fb442f3dd7f245974451e233e20fbe3b0235ade5971db67c9a87add07f2"} Jan 27 13:23:31 crc kubenswrapper[4786]: I0127 13:23:31.440524 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-n7l9z" event={"ID":"4875219b-aaa4-45f8-a5a8-7930dbd8894b","Type":"ContainerDied","Data":"4ba5a2082f1201abc4bdc326ad6c043cf33a2d9ece047008695f61ea56334481"} Jan 27 13:23:31 crc kubenswrapper[4786]: I0127 13:23:31.440545 4786 scope.go:117] "RemoveContainer" containerID="7c5f4fb442f3dd7f245974451e233e20fbe3b0235ade5971db67c9a87add07f2" Jan 27 13:23:31 crc kubenswrapper[4786]: I0127 13:23:31.460372 4786 scope.go:117] "RemoveContainer" containerID="7c5f4fb442f3dd7f245974451e233e20fbe3b0235ade5971db67c9a87add07f2" Jan 27 13:23:31 crc kubenswrapper[4786]: E0127 13:23:31.460935 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c5f4fb442f3dd7f245974451e233e20fbe3b0235ade5971db67c9a87add07f2\": container with ID starting with 7c5f4fb442f3dd7f245974451e233e20fbe3b0235ade5971db67c9a87add07f2 not found: ID does not exist" 
containerID="7c5f4fb442f3dd7f245974451e233e20fbe3b0235ade5971db67c9a87add07f2" Jan 27 13:23:31 crc kubenswrapper[4786]: I0127 13:23:31.460980 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c5f4fb442f3dd7f245974451e233e20fbe3b0235ade5971db67c9a87add07f2"} err="failed to get container status \"7c5f4fb442f3dd7f245974451e233e20fbe3b0235ade5971db67c9a87add07f2\": rpc error: code = NotFound desc = could not find container \"7c5f4fb442f3dd7f245974451e233e20fbe3b0235ade5971db67c9a87add07f2\": container with ID starting with 7c5f4fb442f3dd7f245974451e233e20fbe3b0235ade5971db67c9a87add07f2 not found: ID does not exist" Jan 27 13:23:31 crc kubenswrapper[4786]: I0127 13:23:31.484618 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26ckx\" (UniqueName: \"kubernetes.io/projected/4875219b-aaa4-45f8-a5a8-7930dbd8894b-kube-api-access-26ckx\") pod \"4875219b-aaa4-45f8-a5a8-7930dbd8894b\" (UID: \"4875219b-aaa4-45f8-a5a8-7930dbd8894b\") " Jan 27 13:23:31 crc kubenswrapper[4786]: I0127 13:23:31.489685 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4875219b-aaa4-45f8-a5a8-7930dbd8894b-kube-api-access-26ckx" (OuterVolumeSpecName: "kube-api-access-26ckx") pod "4875219b-aaa4-45f8-a5a8-7930dbd8894b" (UID: "4875219b-aaa4-45f8-a5a8-7930dbd8894b"). InnerVolumeSpecName "kube-api-access-26ckx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:23:31 crc kubenswrapper[4786]: I0127 13:23:31.585785 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26ckx\" (UniqueName: \"kubernetes.io/projected/4875219b-aaa4-45f8-a5a8-7930dbd8894b-kube-api-access-26ckx\") on node \"crc\" DevicePath \"\"" Jan 27 13:23:31 crc kubenswrapper[4786]: I0127 13:23:31.764226 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-n7l9z"] Jan 27 13:23:31 crc kubenswrapper[4786]: I0127 13:23:31.768807 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-n7l9z"] Jan 27 13:23:32 crc kubenswrapper[4786]: I0127 13:23:32.185319 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-7mc6w" Jan 27 13:23:32 crc kubenswrapper[4786]: I0127 13:23:32.186061 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-7mc6w" Jan 27 13:23:32 crc kubenswrapper[4786]: I0127 13:23:32.210674 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-7mc6w" Jan 27 13:23:33 crc kubenswrapper[4786]: I0127 13:23:33.471358 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4875219b-aaa4-45f8-a5a8-7930dbd8894b" path="/var/lib/kubelet/pods/4875219b-aaa4-45f8-a5a8-7930dbd8894b/volumes" Jan 27 13:23:39 crc kubenswrapper[4786]: I0127 13:23:39.532497 4786 patch_prober.go:28] interesting pod/machine-config-daemon-7bxtk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:23:39 crc kubenswrapper[4786]: I0127 13:23:39.533027 4786 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:23:39 crc kubenswrapper[4786]: I0127 13:23:39.533076 4786 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" Jan 27 13:23:39 crc kubenswrapper[4786]: I0127 13:23:39.533823 4786 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4d62cb08eff3ad117221cbb57b9b2f848974ad39a63a811cd7c2c3452ac8780c"} pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 13:23:39 crc kubenswrapper[4786]: I0127 13:23:39.533887 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" containerID="cri-o://4d62cb08eff3ad117221cbb57b9b2f848974ad39a63a811cd7c2c3452ac8780c" gracePeriod=600 Jan 27 13:23:40 crc kubenswrapper[4786]: I0127 13:23:40.512105 4786 generic.go:334] "Generic (PLEG): container finished" podID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerID="4d62cb08eff3ad117221cbb57b9b2f848974ad39a63a811cd7c2c3452ac8780c" exitCode=0 Jan 27 13:23:40 crc kubenswrapper[4786]: I0127 13:23:40.512156 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" event={"ID":"2c6a2646-52f7-41be-8a81-3fed6eac75cc","Type":"ContainerDied","Data":"4d62cb08eff3ad117221cbb57b9b2f848974ad39a63a811cd7c2c3452ac8780c"} Jan 27 13:23:40 crc kubenswrapper[4786]: I0127 13:23:40.512535 4786 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" event={"ID":"2c6a2646-52f7-41be-8a81-3fed6eac75cc","Type":"ContainerStarted","Data":"7a272494933f607d1d0ff23a4dfbd30c05a19b1fad6cb442bff9296b566a9151"} Jan 27 13:23:40 crc kubenswrapper[4786]: I0127 13:23:40.512561 4786 scope.go:117] "RemoveContainer" containerID="b4b187fcf6836625d29a0b0273148285cc0698ee8c6ac736d4d724e9062a8e84" Jan 27 13:23:42 crc kubenswrapper[4786]: I0127 13:23:42.210575 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-7mc6w" Jan 27 13:23:49 crc kubenswrapper[4786]: I0127 13:23:49.374787 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/42ef84609808c573a527c95ede8ab921d64aad4376098e560e058002a0gqzqb"] Jan 27 13:23:49 crc kubenswrapper[4786]: E0127 13:23:49.375565 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4875219b-aaa4-45f8-a5a8-7930dbd8894b" containerName="registry-server" Jan 27 13:23:49 crc kubenswrapper[4786]: I0127 13:23:49.375582 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="4875219b-aaa4-45f8-a5a8-7930dbd8894b" containerName="registry-server" Jan 27 13:23:49 crc kubenswrapper[4786]: I0127 13:23:49.375762 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="4875219b-aaa4-45f8-a5a8-7930dbd8894b" containerName="registry-server" Jan 27 13:23:49 crc kubenswrapper[4786]: I0127 13:23:49.376662 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/42ef84609808c573a527c95ede8ab921d64aad4376098e560e058002a0gqzqb" Jan 27 13:23:49 crc kubenswrapper[4786]: I0127 13:23:49.378474 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-nv428" Jan 27 13:23:49 crc kubenswrapper[4786]: I0127 13:23:49.384486 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/42ef84609808c573a527c95ede8ab921d64aad4376098e560e058002a0gqzqb"] Jan 27 13:23:49 crc kubenswrapper[4786]: I0127 13:23:49.527124 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7091c8b9-67ad-488b-b8f1-3c24875d9436-bundle\") pod \"42ef84609808c573a527c95ede8ab921d64aad4376098e560e058002a0gqzqb\" (UID: \"7091c8b9-67ad-488b-b8f1-3c24875d9436\") " pod="openstack-operators/42ef84609808c573a527c95ede8ab921d64aad4376098e560e058002a0gqzqb" Jan 27 13:23:49 crc kubenswrapper[4786]: I0127 13:23:49.527290 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7091c8b9-67ad-488b-b8f1-3c24875d9436-util\") pod \"42ef84609808c573a527c95ede8ab921d64aad4376098e560e058002a0gqzqb\" (UID: \"7091c8b9-67ad-488b-b8f1-3c24875d9436\") " pod="openstack-operators/42ef84609808c573a527c95ede8ab921d64aad4376098e560e058002a0gqzqb" Jan 27 13:23:49 crc kubenswrapper[4786]: I0127 13:23:49.527536 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzbv4\" (UniqueName: \"kubernetes.io/projected/7091c8b9-67ad-488b-b8f1-3c24875d9436-kube-api-access-mzbv4\") pod \"42ef84609808c573a527c95ede8ab921d64aad4376098e560e058002a0gqzqb\" (UID: \"7091c8b9-67ad-488b-b8f1-3c24875d9436\") " pod="openstack-operators/42ef84609808c573a527c95ede8ab921d64aad4376098e560e058002a0gqzqb" Jan 27 13:23:49 crc kubenswrapper[4786]: I0127 
13:23:49.629103 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7091c8b9-67ad-488b-b8f1-3c24875d9436-bundle\") pod \"42ef84609808c573a527c95ede8ab921d64aad4376098e560e058002a0gqzqb\" (UID: \"7091c8b9-67ad-488b-b8f1-3c24875d9436\") " pod="openstack-operators/42ef84609808c573a527c95ede8ab921d64aad4376098e560e058002a0gqzqb" Jan 27 13:23:49 crc kubenswrapper[4786]: I0127 13:23:49.629174 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7091c8b9-67ad-488b-b8f1-3c24875d9436-util\") pod \"42ef84609808c573a527c95ede8ab921d64aad4376098e560e058002a0gqzqb\" (UID: \"7091c8b9-67ad-488b-b8f1-3c24875d9436\") " pod="openstack-operators/42ef84609808c573a527c95ede8ab921d64aad4376098e560e058002a0gqzqb" Jan 27 13:23:49 crc kubenswrapper[4786]: I0127 13:23:49.629250 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzbv4\" (UniqueName: \"kubernetes.io/projected/7091c8b9-67ad-488b-b8f1-3c24875d9436-kube-api-access-mzbv4\") pod \"42ef84609808c573a527c95ede8ab921d64aad4376098e560e058002a0gqzqb\" (UID: \"7091c8b9-67ad-488b-b8f1-3c24875d9436\") " pod="openstack-operators/42ef84609808c573a527c95ede8ab921d64aad4376098e560e058002a0gqzqb" Jan 27 13:23:49 crc kubenswrapper[4786]: I0127 13:23:49.630040 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7091c8b9-67ad-488b-b8f1-3c24875d9436-bundle\") pod \"42ef84609808c573a527c95ede8ab921d64aad4376098e560e058002a0gqzqb\" (UID: \"7091c8b9-67ad-488b-b8f1-3c24875d9436\") " pod="openstack-operators/42ef84609808c573a527c95ede8ab921d64aad4376098e560e058002a0gqzqb" Jan 27 13:23:49 crc kubenswrapper[4786]: I0127 13:23:49.630055 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/7091c8b9-67ad-488b-b8f1-3c24875d9436-util\") pod \"42ef84609808c573a527c95ede8ab921d64aad4376098e560e058002a0gqzqb\" (UID: \"7091c8b9-67ad-488b-b8f1-3c24875d9436\") " pod="openstack-operators/42ef84609808c573a527c95ede8ab921d64aad4376098e560e058002a0gqzqb" Jan 27 13:23:49 crc kubenswrapper[4786]: I0127 13:23:49.656499 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzbv4\" (UniqueName: \"kubernetes.io/projected/7091c8b9-67ad-488b-b8f1-3c24875d9436-kube-api-access-mzbv4\") pod \"42ef84609808c573a527c95ede8ab921d64aad4376098e560e058002a0gqzqb\" (UID: \"7091c8b9-67ad-488b-b8f1-3c24875d9436\") " pod="openstack-operators/42ef84609808c573a527c95ede8ab921d64aad4376098e560e058002a0gqzqb" Jan 27 13:23:49 crc kubenswrapper[4786]: I0127 13:23:49.711580 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/42ef84609808c573a527c95ede8ab921d64aad4376098e560e058002a0gqzqb" Jan 27 13:23:50 crc kubenswrapper[4786]: I0127 13:23:50.115900 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/42ef84609808c573a527c95ede8ab921d64aad4376098e560e058002a0gqzqb"] Jan 27 13:23:50 crc kubenswrapper[4786]: W0127 13:23:50.119803 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7091c8b9_67ad_488b_b8f1_3c24875d9436.slice/crio-79c03b6324ff15cc7b41b389d6272c34761fba252ce6ed221dd765bfdb3b9a42 WatchSource:0}: Error finding container 79c03b6324ff15cc7b41b389d6272c34761fba252ce6ed221dd765bfdb3b9a42: Status 404 returned error can't find the container with id 79c03b6324ff15cc7b41b389d6272c34761fba252ce6ed221dd765bfdb3b9a42 Jan 27 13:23:50 crc kubenswrapper[4786]: I0127 13:23:50.577131 4786 generic.go:334] "Generic (PLEG): container finished" podID="7091c8b9-67ad-488b-b8f1-3c24875d9436" containerID="c513750eb6587c08b94e3adb49a2d86df6ab3e805a37983c55e25481aa77a5da" exitCode=0 Jan 27 
13:23:50 crc kubenswrapper[4786]: I0127 13:23:50.577420 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/42ef84609808c573a527c95ede8ab921d64aad4376098e560e058002a0gqzqb" event={"ID":"7091c8b9-67ad-488b-b8f1-3c24875d9436","Type":"ContainerDied","Data":"c513750eb6587c08b94e3adb49a2d86df6ab3e805a37983c55e25481aa77a5da"} Jan 27 13:23:50 crc kubenswrapper[4786]: I0127 13:23:50.577475 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/42ef84609808c573a527c95ede8ab921d64aad4376098e560e058002a0gqzqb" event={"ID":"7091c8b9-67ad-488b-b8f1-3c24875d9436","Type":"ContainerStarted","Data":"79c03b6324ff15cc7b41b389d6272c34761fba252ce6ed221dd765bfdb3b9a42"} Jan 27 13:23:53 crc kubenswrapper[4786]: I0127 13:23:53.606503 4786 generic.go:334] "Generic (PLEG): container finished" podID="7091c8b9-67ad-488b-b8f1-3c24875d9436" containerID="b6f074eb51201e20b7f254eb2d734058abd1dddf7535371e89690473f5ac20ed" exitCode=0 Jan 27 13:23:53 crc kubenswrapper[4786]: I0127 13:23:53.606577 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/42ef84609808c573a527c95ede8ab921d64aad4376098e560e058002a0gqzqb" event={"ID":"7091c8b9-67ad-488b-b8f1-3c24875d9436","Type":"ContainerDied","Data":"b6f074eb51201e20b7f254eb2d734058abd1dddf7535371e89690473f5ac20ed"} Jan 27 13:23:54 crc kubenswrapper[4786]: I0127 13:23:54.614259 4786 generic.go:334] "Generic (PLEG): container finished" podID="7091c8b9-67ad-488b-b8f1-3c24875d9436" containerID="9b7590ad5d74ca4fa072ea84a6518c92ddc5b6892ea9b307cc0f09f5a1f0f70b" exitCode=0 Jan 27 13:23:54 crc kubenswrapper[4786]: I0127 13:23:54.614310 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/42ef84609808c573a527c95ede8ab921d64aad4376098e560e058002a0gqzqb" event={"ID":"7091c8b9-67ad-488b-b8f1-3c24875d9436","Type":"ContainerDied","Data":"9b7590ad5d74ca4fa072ea84a6518c92ddc5b6892ea9b307cc0f09f5a1f0f70b"} Jan 27 13:23:55 crc kubenswrapper[4786]: I0127 13:23:55.873453 
4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/42ef84609808c573a527c95ede8ab921d64aad4376098e560e058002a0gqzqb" Jan 27 13:23:55 crc kubenswrapper[4786]: I0127 13:23:55.913449 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzbv4\" (UniqueName: \"kubernetes.io/projected/7091c8b9-67ad-488b-b8f1-3c24875d9436-kube-api-access-mzbv4\") pod \"7091c8b9-67ad-488b-b8f1-3c24875d9436\" (UID: \"7091c8b9-67ad-488b-b8f1-3c24875d9436\") " Jan 27 13:23:55 crc kubenswrapper[4786]: I0127 13:23:55.913746 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7091c8b9-67ad-488b-b8f1-3c24875d9436-util\") pod \"7091c8b9-67ad-488b-b8f1-3c24875d9436\" (UID: \"7091c8b9-67ad-488b-b8f1-3c24875d9436\") " Jan 27 13:23:55 crc kubenswrapper[4786]: I0127 13:23:55.913802 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7091c8b9-67ad-488b-b8f1-3c24875d9436-bundle\") pod \"7091c8b9-67ad-488b-b8f1-3c24875d9436\" (UID: \"7091c8b9-67ad-488b-b8f1-3c24875d9436\") " Jan 27 13:23:55 crc kubenswrapper[4786]: I0127 13:23:55.915009 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7091c8b9-67ad-488b-b8f1-3c24875d9436-bundle" (OuterVolumeSpecName: "bundle") pod "7091c8b9-67ad-488b-b8f1-3c24875d9436" (UID: "7091c8b9-67ad-488b-b8f1-3c24875d9436"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:23:55 crc kubenswrapper[4786]: I0127 13:23:55.919867 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7091c8b9-67ad-488b-b8f1-3c24875d9436-kube-api-access-mzbv4" (OuterVolumeSpecName: "kube-api-access-mzbv4") pod "7091c8b9-67ad-488b-b8f1-3c24875d9436" (UID: "7091c8b9-67ad-488b-b8f1-3c24875d9436"). 
InnerVolumeSpecName "kube-api-access-mzbv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:23:56 crc kubenswrapper[4786]: I0127 13:23:56.015812 4786 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7091c8b9-67ad-488b-b8f1-3c24875d9436-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 13:23:56 crc kubenswrapper[4786]: I0127 13:23:56.015846 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzbv4\" (UniqueName: \"kubernetes.io/projected/7091c8b9-67ad-488b-b8f1-3c24875d9436-kube-api-access-mzbv4\") on node \"crc\" DevicePath \"\"" Jan 27 13:23:56 crc kubenswrapper[4786]: I0127 13:23:56.209750 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7091c8b9-67ad-488b-b8f1-3c24875d9436-util" (OuterVolumeSpecName: "util") pod "7091c8b9-67ad-488b-b8f1-3c24875d9436" (UID: "7091c8b9-67ad-488b-b8f1-3c24875d9436"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:23:56 crc kubenswrapper[4786]: I0127 13:23:56.219091 4786 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7091c8b9-67ad-488b-b8f1-3c24875d9436-util\") on node \"crc\" DevicePath \"\"" Jan 27 13:23:56 crc kubenswrapper[4786]: I0127 13:23:56.626447 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/42ef84609808c573a527c95ede8ab921d64aad4376098e560e058002a0gqzqb" event={"ID":"7091c8b9-67ad-488b-b8f1-3c24875d9436","Type":"ContainerDied","Data":"79c03b6324ff15cc7b41b389d6272c34761fba252ce6ed221dd765bfdb3b9a42"} Jan 27 13:23:56 crc kubenswrapper[4786]: I0127 13:23:56.626491 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79c03b6324ff15cc7b41b389d6272c34761fba252ce6ed221dd765bfdb3b9a42" Jan 27 13:23:56 crc kubenswrapper[4786]: I0127 13:23:56.626552 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/42ef84609808c573a527c95ede8ab921d64aad4376098e560e058002a0gqzqb" Jan 27 13:24:01 crc kubenswrapper[4786]: I0127 13:24:01.334743 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-7f8557b66c-vtv48"] Jan 27 13:24:01 crc kubenswrapper[4786]: E0127 13:24:01.335449 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7091c8b9-67ad-488b-b8f1-3c24875d9436" containerName="pull" Jan 27 13:24:01 crc kubenswrapper[4786]: I0127 13:24:01.335464 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="7091c8b9-67ad-488b-b8f1-3c24875d9436" containerName="pull" Jan 27 13:24:01 crc kubenswrapper[4786]: E0127 13:24:01.335476 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7091c8b9-67ad-488b-b8f1-3c24875d9436" containerName="extract" Jan 27 13:24:01 crc kubenswrapper[4786]: I0127 13:24:01.335483 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="7091c8b9-67ad-488b-b8f1-3c24875d9436" containerName="extract" Jan 27 13:24:01 crc kubenswrapper[4786]: E0127 13:24:01.335509 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7091c8b9-67ad-488b-b8f1-3c24875d9436" containerName="util" Jan 27 13:24:01 crc kubenswrapper[4786]: I0127 13:24:01.335517 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="7091c8b9-67ad-488b-b8f1-3c24875d9436" containerName="util" Jan 27 13:24:01 crc kubenswrapper[4786]: I0127 13:24:01.335661 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="7091c8b9-67ad-488b-b8f1-3c24875d9436" containerName="extract" Jan 27 13:24:01 crc kubenswrapper[4786]: I0127 13:24:01.336065 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7f8557b66c-vtv48" Jan 27 13:24:01 crc kubenswrapper[4786]: I0127 13:24:01.350158 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-f2dz5" Jan 27 13:24:01 crc kubenswrapper[4786]: I0127 13:24:01.353981 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7f8557b66c-vtv48"] Jan 27 13:24:01 crc kubenswrapper[4786]: I0127 13:24:01.386748 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n8dj\" (UniqueName: \"kubernetes.io/projected/6be126f6-6357-4e54-b9a3-7f6a996bdc0c-kube-api-access-9n8dj\") pod \"openstack-operator-controller-init-7f8557b66c-vtv48\" (UID: \"6be126f6-6357-4e54-b9a3-7f6a996bdc0c\") " pod="openstack-operators/openstack-operator-controller-init-7f8557b66c-vtv48" Jan 27 13:24:01 crc kubenswrapper[4786]: I0127 13:24:01.488441 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n8dj\" (UniqueName: \"kubernetes.io/projected/6be126f6-6357-4e54-b9a3-7f6a996bdc0c-kube-api-access-9n8dj\") pod \"openstack-operator-controller-init-7f8557b66c-vtv48\" (UID: \"6be126f6-6357-4e54-b9a3-7f6a996bdc0c\") " pod="openstack-operators/openstack-operator-controller-init-7f8557b66c-vtv48" Jan 27 13:24:01 crc kubenswrapper[4786]: I0127 13:24:01.510754 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n8dj\" (UniqueName: \"kubernetes.io/projected/6be126f6-6357-4e54-b9a3-7f6a996bdc0c-kube-api-access-9n8dj\") pod \"openstack-operator-controller-init-7f8557b66c-vtv48\" (UID: \"6be126f6-6357-4e54-b9a3-7f6a996bdc0c\") " pod="openstack-operators/openstack-operator-controller-init-7f8557b66c-vtv48" Jan 27 13:24:01 crc kubenswrapper[4786]: I0127 13:24:01.657884 4786 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7f8557b66c-vtv48" Jan 27 13:24:01 crc kubenswrapper[4786]: W0127 13:24:01.963958 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6be126f6_6357_4e54_b9a3_7f6a996bdc0c.slice/crio-66f03e3125ca6e3aa79c0833c477b170e62744cb43fc3a68ea1f23a4a5f6c8f8 WatchSource:0}: Error finding container 66f03e3125ca6e3aa79c0833c477b170e62744cb43fc3a68ea1f23a4a5f6c8f8: Status 404 returned error can't find the container with id 66f03e3125ca6e3aa79c0833c477b170e62744cb43fc3a68ea1f23a4a5f6c8f8 Jan 27 13:24:01 crc kubenswrapper[4786]: I0127 13:24:01.971724 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7f8557b66c-vtv48"] Jan 27 13:24:02 crc kubenswrapper[4786]: I0127 13:24:02.664870 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7f8557b66c-vtv48" event={"ID":"6be126f6-6357-4e54-b9a3-7f6a996bdc0c","Type":"ContainerStarted","Data":"66f03e3125ca6e3aa79c0833c477b170e62744cb43fc3a68ea1f23a4a5f6c8f8"} Jan 27 13:24:06 crc kubenswrapper[4786]: I0127 13:24:06.688913 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7f8557b66c-vtv48" event={"ID":"6be126f6-6357-4e54-b9a3-7f6a996bdc0c","Type":"ContainerStarted","Data":"77fe1abbfd88d4bdf51a8270e582074210a32cabb3b31931d8631e758bb48fb6"} Jan 27 13:24:06 crc kubenswrapper[4786]: I0127 13:24:06.689286 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7f8557b66c-vtv48" Jan 27 13:24:06 crc kubenswrapper[4786]: I0127 13:24:06.720666 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-7f8557b66c-vtv48" podStartSLOduration=1.198741178 
podStartE2EDuration="5.72065133s" podCreationTimestamp="2026-01-27 13:24:01 +0000 UTC" firstStartedPulling="2026-01-27 13:24:01.96585034 +0000 UTC m=+1025.176464459" lastFinishedPulling="2026-01-27 13:24:06.487760492 +0000 UTC m=+1029.698374611" observedRunningTime="2026-01-27 13:24:06.71700817 +0000 UTC m=+1029.927622299" watchObservedRunningTime="2026-01-27 13:24:06.72065133 +0000 UTC m=+1029.931265449" Jan 27 13:24:11 crc kubenswrapper[4786]: I0127 13:24:11.660411 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-7f8557b66c-vtv48" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.554319 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-5mpdp"] Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.555534 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5mpdp" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.561130 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-cpn89"] Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.562054 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-cpn89" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.566484 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-nb95m" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.566789 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-c4qt7" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.572308 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-w6r6x"] Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.573245 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-w6r6x" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.579384 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-cb8qh" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.582330 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-5mpdp"] Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.583912 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvv5w\" (UniqueName: \"kubernetes.io/projected/a5ebc5e9-fc27-4326-927a-c791f35a71e9-kube-api-access-bvv5w\") pod \"barbican-operator-controller-manager-7f86f8796f-5mpdp\" (UID: \"a5ebc5e9-fc27-4326-927a-c791f35a71e9\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5mpdp" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.602534 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-cpn89"] Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.614891 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-w6r6x"] Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.665378 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-fwlxc"] Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.666860 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fwlxc" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.670071 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-nkpq6" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.685293 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk9jx\" (UniqueName: \"kubernetes.io/projected/37986633-7f55-41aa-b83d-7f74a5640f2f-kube-api-access-lk9jx\") pod \"cinder-operator-controller-manager-7478f7dbf9-cpn89\" (UID: \"37986633-7f55-41aa-b83d-7f74a5640f2f\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-cpn89" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.685596 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvv5w\" (UniqueName: \"kubernetes.io/projected/a5ebc5e9-fc27-4326-927a-c791f35a71e9-kube-api-access-bvv5w\") pod \"barbican-operator-controller-manager-7f86f8796f-5mpdp\" (UID: \"a5ebc5e9-fc27-4326-927a-c791f35a71e9\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5mpdp" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.685708 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-x4f7r\" (UniqueName: \"kubernetes.io/projected/84881c35-4d84-4c91-b401-0bf1d7de9314-kube-api-access-x4f7r\") pod \"designate-operator-controller-manager-b45d7bf98-w6r6x\" (UID: \"84881c35-4d84-4c91-b401-0bf1d7de9314\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-w6r6x" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.688903 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-z5xnb"] Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.689893 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-z5xnb" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.694960 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-jn85p" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.699075 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-fwlxc"] Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.710007 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-z5xnb"] Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.727051 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvv5w\" (UniqueName: \"kubernetes.io/projected/a5ebc5e9-fc27-4326-927a-c791f35a71e9-kube-api-access-bvv5w\") pod \"barbican-operator-controller-manager-7f86f8796f-5mpdp\" (UID: \"a5ebc5e9-fc27-4326-927a-c791f35a71e9\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5mpdp" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.741271 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wnmcp"] Jan 27 13:24:30 crc 
kubenswrapper[4786]: I0127 13:24:30.745565 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wnmcp" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.749094 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-jbwqr" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.790395 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk9jx\" (UniqueName: \"kubernetes.io/projected/37986633-7f55-41aa-b83d-7f74a5640f2f-kube-api-access-lk9jx\") pod \"cinder-operator-controller-manager-7478f7dbf9-cpn89\" (UID: \"37986633-7f55-41aa-b83d-7f74a5640f2f\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-cpn89" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.790461 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqpbl\" (UniqueName: \"kubernetes.io/projected/7e2710fc-7453-40a9-81c0-ccec15d86a77-kube-api-access-rqpbl\") pod \"horizon-operator-controller-manager-77d5c5b54f-wnmcp\" (UID: \"7e2710fc-7453-40a9-81c0-ccec15d86a77\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wnmcp" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.790512 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cld9b\" (UniqueName: \"kubernetes.io/projected/e607a576-31c5-4ef7-82ea-66851b5a33d2-kube-api-access-cld9b\") pod \"heat-operator-controller-manager-594c8c9d5d-fwlxc\" (UID: \"e607a576-31c5-4ef7-82ea-66851b5a33d2\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fwlxc" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.790580 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4f7r\" (UniqueName: 
\"kubernetes.io/projected/84881c35-4d84-4c91-b401-0bf1d7de9314-kube-api-access-x4f7r\") pod \"designate-operator-controller-manager-b45d7bf98-w6r6x\" (UID: \"84881c35-4d84-4c91-b401-0bf1d7de9314\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-w6r6x" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.790629 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc2q6\" (UniqueName: \"kubernetes.io/projected/4a240cd4-c49f-4716-80bb-6d1ba632a32c-kube-api-access-pc2q6\") pod \"glance-operator-controller-manager-78fdd796fd-z5xnb\" (UID: \"4a240cd4-c49f-4716-80bb-6d1ba632a32c\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-z5xnb" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.819973 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wnmcp"] Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.828808 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-xdrln"] Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.829764 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-xdrln" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.829783 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk9jx\" (UniqueName: \"kubernetes.io/projected/37986633-7f55-41aa-b83d-7f74a5640f2f-kube-api-access-lk9jx\") pod \"cinder-operator-controller-manager-7478f7dbf9-cpn89\" (UID: \"37986633-7f55-41aa-b83d-7f74a5640f2f\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-cpn89" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.832468 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.832542 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-pmfkd" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.843671 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-lspl5"] Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.844635 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-lspl5" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.844788 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4f7r\" (UniqueName: \"kubernetes.io/projected/84881c35-4d84-4c91-b401-0bf1d7de9314-kube-api-access-x4f7r\") pod \"designate-operator-controller-manager-b45d7bf98-w6r6x\" (UID: \"84881c35-4d84-4c91-b401-0bf1d7de9314\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-w6r6x" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.854966 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-tm94r" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.864031 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-xdrln"] Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.877691 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-6p8s9"] Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.878564 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-6p8s9" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.881220 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5mpdp" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.888042 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-cdbdf" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.896130 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqpbl\" (UniqueName: \"kubernetes.io/projected/7e2710fc-7453-40a9-81c0-ccec15d86a77-kube-api-access-rqpbl\") pod \"horizon-operator-controller-manager-77d5c5b54f-wnmcp\" (UID: \"7e2710fc-7453-40a9-81c0-ccec15d86a77\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wnmcp" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.896191 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cld9b\" (UniqueName: \"kubernetes.io/projected/e607a576-31c5-4ef7-82ea-66851b5a33d2-kube-api-access-cld9b\") pod \"heat-operator-controller-manager-594c8c9d5d-fwlxc\" (UID: \"e607a576-31c5-4ef7-82ea-66851b5a33d2\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fwlxc" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.896246 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/43d55b3f-3bd8-4083-9e0d-f398938a47e6-cert\") pod \"infra-operator-controller-manager-694cf4f878-xdrln\" (UID: \"43d55b3f-3bd8-4083-9e0d-f398938a47e6\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-xdrln" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.896279 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf76b\" (UniqueName: \"kubernetes.io/projected/cfc6eb47-18a2-442a-a1d8-ddec61462156-kube-api-access-nf76b\") pod 
\"ironic-operator-controller-manager-598f7747c9-lspl5\" (UID: \"cfc6eb47-18a2-442a-a1d8-ddec61462156\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-lspl5" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.896328 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc2q6\" (UniqueName: \"kubernetes.io/projected/4a240cd4-c49f-4716-80bb-6d1ba632a32c-kube-api-access-pc2q6\") pod \"glance-operator-controller-manager-78fdd796fd-z5xnb\" (UID: \"4a240cd4-c49f-4716-80bb-6d1ba632a32c\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-z5xnb" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.896360 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6jnj\" (UniqueName: \"kubernetes.io/projected/43d55b3f-3bd8-4083-9e0d-f398938a47e6-kube-api-access-g6jnj\") pod \"infra-operator-controller-manager-694cf4f878-xdrln\" (UID: \"43d55b3f-3bd8-4083-9e0d-f398938a47e6\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-xdrln" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.903771 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-lspl5"] Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.936262 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cld9b\" (UniqueName: \"kubernetes.io/projected/e607a576-31c5-4ef7-82ea-66851b5a33d2-kube-api-access-cld9b\") pod \"heat-operator-controller-manager-594c8c9d5d-fwlxc\" (UID: \"e607a576-31c5-4ef7-82ea-66851b5a33d2\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fwlxc" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.936430 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-6p8s9"] Jan 27 13:24:30 crc 
kubenswrapper[4786]: I0127 13:24:30.937150 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-cpn89" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.944522 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-c2c8q"] Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.945236 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqpbl\" (UniqueName: \"kubernetes.io/projected/7e2710fc-7453-40a9-81c0-ccec15d86a77-kube-api-access-rqpbl\") pod \"horizon-operator-controller-manager-77d5c5b54f-wnmcp\" (UID: \"7e2710fc-7453-40a9-81c0-ccec15d86a77\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wnmcp" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.945486 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc2q6\" (UniqueName: \"kubernetes.io/projected/4a240cd4-c49f-4716-80bb-6d1ba632a32c-kube-api-access-pc2q6\") pod \"glance-operator-controller-manager-78fdd796fd-z5xnb\" (UID: \"4a240cd4-c49f-4716-80bb-6d1ba632a32c\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-z5xnb" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.946111 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-c2c8q" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.949961 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-w6r6x" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.952053 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-5zb2z"] Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.953058 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-5zb2z" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.957652 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-x5d7w" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.962274 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-c2c8q"] Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.963292 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-kgnfg" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.982332 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-m75zn"] Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.983199 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-m75zn" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.986760 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-w74xj" Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.986922 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-5zb2z"] Jan 27 13:24:30 crc kubenswrapper[4786]: I0127 13:24:30.998695 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fwlxc" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.008674 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6jnj\" (UniqueName: \"kubernetes.io/projected/43d55b3f-3bd8-4083-9e0d-f398938a47e6-kube-api-access-g6jnj\") pod \"infra-operator-controller-manager-694cf4f878-xdrln\" (UID: \"43d55b3f-3bd8-4083-9e0d-f398938a47e6\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-xdrln" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.008734 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-962v5\" (UniqueName: \"kubernetes.io/projected/4058f919-d5c7-4f73-9c8a-432409f9022a-kube-api-access-962v5\") pod \"manila-operator-controller-manager-78c6999f6f-c2c8q\" (UID: \"4058f919-d5c7-4f73-9c8a-432409f9022a\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-c2c8q" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.008789 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb7n8\" (UniqueName: \"kubernetes.io/projected/88c4fa6a-bb1a-46fe-a863-473b9ec66ce7-kube-api-access-vb7n8\") pod 
\"mariadb-operator-controller-manager-6b9fb5fdcb-5zb2z\" (UID: \"88c4fa6a-bb1a-46fe-a863-473b9ec66ce7\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-5zb2z" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.008826 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/43d55b3f-3bd8-4083-9e0d-f398938a47e6-cert\") pod \"infra-operator-controller-manager-694cf4f878-xdrln\" (UID: \"43d55b3f-3bd8-4083-9e0d-f398938a47e6\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-xdrln" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.008854 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf76b\" (UniqueName: \"kubernetes.io/projected/cfc6eb47-18a2-442a-a1d8-ddec61462156-kube-api-access-nf76b\") pod \"ironic-operator-controller-manager-598f7747c9-lspl5\" (UID: \"cfc6eb47-18a2-442a-a1d8-ddec61462156\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-lspl5" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.008881 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw7xd\" (UniqueName: \"kubernetes.io/projected/5614e239-8fc3-4091-aad4-55a217ca1092-kube-api-access-hw7xd\") pod \"keystone-operator-controller-manager-b8b6d4659-6p8s9\" (UID: \"5614e239-8fc3-4091-aad4-55a217ca1092\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-6p8s9" Jan 27 13:24:31 crc kubenswrapper[4786]: E0127 13:24:31.009252 4786 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 13:24:31 crc kubenswrapper[4786]: E0127 13:24:31.009300 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43d55b3f-3bd8-4083-9e0d-f398938a47e6-cert podName:43d55b3f-3bd8-4083-9e0d-f398938a47e6 
nodeName:}" failed. No retries permitted until 2026-01-27 13:24:31.509285021 +0000 UTC m=+1054.719899140 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/43d55b3f-3bd8-4083-9e0d-f398938a47e6-cert") pod "infra-operator-controller-manager-694cf4f878-xdrln" (UID: "43d55b3f-3bd8-4083-9e0d-f398938a47e6") : secret "infra-operator-webhook-server-cert" not found Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.012545 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-m75zn"] Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.038557 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-6cffd64fd8-cgnm5"] Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.039629 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6cffd64fd8-cgnm5" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.040564 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf76b\" (UniqueName: \"kubernetes.io/projected/cfc6eb47-18a2-442a-a1d8-ddec61462156-kube-api-access-nf76b\") pod \"ironic-operator-controller-manager-598f7747c9-lspl5\" (UID: \"cfc6eb47-18a2-442a-a1d8-ddec61462156\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-lspl5" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.042313 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-z5xnb" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.052488 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-cg2f9" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.052590 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6jnj\" (UniqueName: \"kubernetes.io/projected/43d55b3f-3bd8-4083-9e0d-f398938a47e6-kube-api-access-g6jnj\") pod \"infra-operator-controller-manager-694cf4f878-xdrln\" (UID: \"43d55b3f-3bd8-4083-9e0d-f398938a47e6\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-xdrln" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.072009 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-pb5lk"] Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.073649 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-pb5lk" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.075551 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-b2894" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.109860 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wnmcp" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.111228 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw7xd\" (UniqueName: \"kubernetes.io/projected/5614e239-8fc3-4091-aad4-55a217ca1092-kube-api-access-hw7xd\") pod \"keystone-operator-controller-manager-b8b6d4659-6p8s9\" (UID: \"5614e239-8fc3-4091-aad4-55a217ca1092\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-6p8s9" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.111303 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx572\" (UniqueName: \"kubernetes.io/projected/1eed4ae8-357f-4388-a9d3-9382b0fc84ec-kube-api-access-wx572\") pod \"neutron-operator-controller-manager-78d58447c5-m75zn\" (UID: \"1eed4ae8-357f-4388-a9d3-9382b0fc84ec\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-m75zn" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.111405 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfv29\" (UniqueName: \"kubernetes.io/projected/5c18c2c6-04e6-4b87-b92d-586823b20ac1-kube-api-access-rfv29\") pod \"octavia-operator-controller-manager-5f4cd88d46-pb5lk\" (UID: \"5c18c2c6-04e6-4b87-b92d-586823b20ac1\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-pb5lk" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.111480 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-962v5\" (UniqueName: \"kubernetes.io/projected/4058f919-d5c7-4f73-9c8a-432409f9022a-kube-api-access-962v5\") pod \"manila-operator-controller-manager-78c6999f6f-c2c8q\" (UID: \"4058f919-d5c7-4f73-9c8a-432409f9022a\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-c2c8q" Jan 27 13:24:31 
crc kubenswrapper[4786]: I0127 13:24:31.111555 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb7n8\" (UniqueName: \"kubernetes.io/projected/88c4fa6a-bb1a-46fe-a863-473b9ec66ce7-kube-api-access-vb7n8\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-5zb2z\" (UID: \"88c4fa6a-bb1a-46fe-a863-473b9ec66ce7\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-5zb2z" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.113571 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqzlg\" (UniqueName: \"kubernetes.io/projected/1d8c2e75-6d30-4c9f-9f3f-82bda48f6d43-kube-api-access-nqzlg\") pod \"nova-operator-controller-manager-6cffd64fd8-cgnm5\" (UID: \"1d8c2e75-6d30-4c9f-9f3f-82bda48f6d43\") " pod="openstack-operators/nova-operator-controller-manager-6cffd64fd8-cgnm5" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.118779 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-pb5lk"] Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.129670 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6cffd64fd8-cgnm5"] Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.136076 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb7n8\" (UniqueName: \"kubernetes.io/projected/88c4fa6a-bb1a-46fe-a863-473b9ec66ce7-kube-api-access-vb7n8\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-5zb2z\" (UID: \"88c4fa6a-bb1a-46fe-a863-473b9ec66ce7\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-5zb2z" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.136754 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw7xd\" (UniqueName: 
\"kubernetes.io/projected/5614e239-8fc3-4091-aad4-55a217ca1092-kube-api-access-hw7xd\") pod \"keystone-operator-controller-manager-b8b6d4659-6p8s9\" (UID: \"5614e239-8fc3-4091-aad4-55a217ca1092\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-6p8s9" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.147132 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-962v5\" (UniqueName: \"kubernetes.io/projected/4058f919-d5c7-4f73-9c8a-432409f9022a-kube-api-access-962v5\") pod \"manila-operator-controller-manager-78c6999f6f-c2c8q\" (UID: \"4058f919-d5c7-4f73-9c8a-432409f9022a\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-c2c8q" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.172873 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-cv96v"] Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.173870 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cv96v" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.177133 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-pf6xw" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.179695 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-ghr9t"] Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.181243 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-ghr9t" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.183250 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-pwwww" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.190984 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8546q8kt"] Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.199073 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8546q8kt" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.200859 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.201591 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-xqwhh" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.213012 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-ghr9t"] Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.214955 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtxdw\" (UniqueName: \"kubernetes.io/projected/42803b12-da36-48df-b9bb-ed3d4555b7b4-kube-api-access-jtxdw\") pod \"ovn-operator-controller-manager-6f75f45d54-cv96v\" (UID: \"42803b12-da36-48df-b9bb-ed3d4555b7b4\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cv96v" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.216031 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-wx572\" (UniqueName: \"kubernetes.io/projected/1eed4ae8-357f-4388-a9d3-9382b0fc84ec-kube-api-access-wx572\") pod \"neutron-operator-controller-manager-78d58447c5-m75zn\" (UID: \"1eed4ae8-357f-4388-a9d3-9382b0fc84ec\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-m75zn" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.216064 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfv29\" (UniqueName: \"kubernetes.io/projected/5c18c2c6-04e6-4b87-b92d-586823b20ac1-kube-api-access-rfv29\") pod \"octavia-operator-controller-manager-5f4cd88d46-pb5lk\" (UID: \"5c18c2c6-04e6-4b87-b92d-586823b20ac1\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-pb5lk" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.216203 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqzlg\" (UniqueName: \"kubernetes.io/projected/1d8c2e75-6d30-4c9f-9f3f-82bda48f6d43-kube-api-access-nqzlg\") pod \"nova-operator-controller-manager-6cffd64fd8-cgnm5\" (UID: \"1d8c2e75-6d30-4c9f-9f3f-82bda48f6d43\") " pod="openstack-operators/nova-operator-controller-manager-6cffd64fd8-cgnm5" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.224695 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-cv96v"] Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.238621 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8546q8kt"] Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.239748 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqzlg\" (UniqueName: \"kubernetes.io/projected/1d8c2e75-6d30-4c9f-9f3f-82bda48f6d43-kube-api-access-nqzlg\") pod \"nova-operator-controller-manager-6cffd64fd8-cgnm5\" (UID: 
\"1d8c2e75-6d30-4c9f-9f3f-82bda48f6d43\") " pod="openstack-operators/nova-operator-controller-manager-6cffd64fd8-cgnm5" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.247794 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfv29\" (UniqueName: \"kubernetes.io/projected/5c18c2c6-04e6-4b87-b92d-586823b20ac1-kube-api-access-rfv29\") pod \"octavia-operator-controller-manager-5f4cd88d46-pb5lk\" (UID: \"5c18c2c6-04e6-4b87-b92d-586823b20ac1\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-pb5lk" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.247834 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx572\" (UniqueName: \"kubernetes.io/projected/1eed4ae8-357f-4388-a9d3-9382b0fc84ec-kube-api-access-wx572\") pod \"neutron-operator-controller-manager-78d58447c5-m75zn\" (UID: \"1eed4ae8-357f-4388-a9d3-9382b0fc84ec\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-m75zn" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.264449 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-lspl5" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.275776 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-qmp6d"] Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.276688 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-qmp6d" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.280015 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-7nd7v" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.282707 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fkbbc"] Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.284352 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-6p8s9" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.288131 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fkbbc" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.290502 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-qxl47" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.319667 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtxdw\" (UniqueName: \"kubernetes.io/projected/42803b12-da36-48df-b9bb-ed3d4555b7b4-kube-api-access-jtxdw\") pod \"ovn-operator-controller-manager-6f75f45d54-cv96v\" (UID: \"42803b12-da36-48df-b9bb-ed3d4555b7b4\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cv96v" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.320031 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k5xj\" (UniqueName: \"kubernetes.io/projected/9d2d7f2c-4522-45bf-a12d-1eb7cc11041e-kube-api-access-6k5xj\") pod \"swift-operator-controller-manager-547cbdb99f-qmp6d\" (UID: \"9d2d7f2c-4522-45bf-a12d-1eb7cc11041e\") 
" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-qmp6d" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.320269 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8c6r\" (UniqueName: \"kubernetes.io/projected/45599804-69cd-44f0-bb76-a15e5a3ff700-kube-api-access-v8c6r\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8546q8kt\" (UID: \"45599804-69cd-44f0-bb76-a15e5a3ff700\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8546q8kt" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.320360 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2f58\" (UniqueName: \"kubernetes.io/projected/250d0fff-09d2-49be-94a3-6eefdd3aab06-kube-api-access-v2f58\") pod \"placement-operator-controller-manager-79d5ccc684-ghr9t\" (UID: \"250d0fff-09d2-49be-94a3-6eefdd3aab06\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-ghr9t" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.320494 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/45599804-69cd-44f0-bb76-a15e5a3ff700-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8546q8kt\" (UID: \"45599804-69cd-44f0-bb76-a15e5a3ff700\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8546q8kt" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.338564 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-c2c8q" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.355352 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtxdw\" (UniqueName: \"kubernetes.io/projected/42803b12-da36-48df-b9bb-ed3d4555b7b4-kube-api-access-jtxdw\") pod \"ovn-operator-controller-manager-6f75f45d54-cv96v\" (UID: \"42803b12-da36-48df-b9bb-ed3d4555b7b4\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cv96v" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.371298 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-5zb2z" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.381159 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-m75zn" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.392030 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6cffd64fd8-cgnm5" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.413034 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-pb5lk" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.421394 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgxcb\" (UniqueName: \"kubernetes.io/projected/39341414-eb82-400d-96ce-e546dd32d15b-kube-api-access-tgxcb\") pod \"telemetry-operator-controller-manager-85cd9769bb-fkbbc\" (UID: \"39341414-eb82-400d-96ce-e546dd32d15b\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fkbbc" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.421455 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2f58\" (UniqueName: \"kubernetes.io/projected/250d0fff-09d2-49be-94a3-6eefdd3aab06-kube-api-access-v2f58\") pod \"placement-operator-controller-manager-79d5ccc684-ghr9t\" (UID: \"250d0fff-09d2-49be-94a3-6eefdd3aab06\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-ghr9t" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.421498 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/45599804-69cd-44f0-bb76-a15e5a3ff700-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8546q8kt\" (UID: \"45599804-69cd-44f0-bb76-a15e5a3ff700\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8546q8kt" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.421562 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k5xj\" (UniqueName: \"kubernetes.io/projected/9d2d7f2c-4522-45bf-a12d-1eb7cc11041e-kube-api-access-6k5xj\") pod \"swift-operator-controller-manager-547cbdb99f-qmp6d\" (UID: \"9d2d7f2c-4522-45bf-a12d-1eb7cc11041e\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-qmp6d" Jan 27 13:24:31 crc kubenswrapper[4786]: 
I0127 13:24:31.421953 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8c6r\" (UniqueName: \"kubernetes.io/projected/45599804-69cd-44f0-bb76-a15e5a3ff700-kube-api-access-v8c6r\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8546q8kt\" (UID: \"45599804-69cd-44f0-bb76-a15e5a3ff700\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8546q8kt" Jan 27 13:24:31 crc kubenswrapper[4786]: E0127 13:24:31.421975 4786 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 13:24:31 crc kubenswrapper[4786]: E0127 13:24:31.422016 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45599804-69cd-44f0-bb76-a15e5a3ff700-cert podName:45599804-69cd-44f0-bb76-a15e5a3ff700 nodeName:}" failed. No retries permitted until 2026-01-27 13:24:31.922003026 +0000 UTC m=+1055.132617145 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/45599804-69cd-44f0-bb76-a15e5a3ff700-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8546q8kt" (UID: "45599804-69cd-44f0-bb76-a15e5a3ff700") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.429269 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-qmp6d"] Jan 27 13:24:31 crc kubenswrapper[4786]: W0127 13:24:31.442752 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37986633_7f55_41aa_b83d_7f74a5640f2f.slice/crio-0ba690b4e68c2b73588dd5329bcc016f89d6cc65c452bb5e414a2177302040d0 WatchSource:0}: Error finding container 0ba690b4e68c2b73588dd5329bcc016f89d6cc65c452bb5e414a2177302040d0: Status 404 returned error can't find the container with id 0ba690b4e68c2b73588dd5329bcc016f89d6cc65c452bb5e414a2177302040d0 Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.446368 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8c6r\" (UniqueName: \"kubernetes.io/projected/45599804-69cd-44f0-bb76-a15e5a3ff700-kube-api-access-v8c6r\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8546q8kt\" (UID: \"45599804-69cd-44f0-bb76-a15e5a3ff700\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8546q8kt" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.447531 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2f58\" (UniqueName: \"kubernetes.io/projected/250d0fff-09d2-49be-94a3-6eefdd3aab06-kube-api-access-v2f58\") pod \"placement-operator-controller-manager-79d5ccc684-ghr9t\" (UID: \"250d0fff-09d2-49be-94a3-6eefdd3aab06\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-ghr9t" Jan 27 13:24:31 
crc kubenswrapper[4786]: I0127 13:24:31.448625 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-8whbf"] Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.450165 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8whbf" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.452569 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k5xj\" (UniqueName: \"kubernetes.io/projected/9d2d7f2c-4522-45bf-a12d-1eb7cc11041e-kube-api-access-6k5xj\") pod \"swift-operator-controller-manager-547cbdb99f-qmp6d\" (UID: \"9d2d7f2c-4522-45bf-a12d-1eb7cc11041e\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-qmp6d" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.454231 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-tr9tx" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.504076 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-8whbf"] Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.510790 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fkbbc"] Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.524143 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgxcb\" (UniqueName: \"kubernetes.io/projected/39341414-eb82-400d-96ce-e546dd32d15b-kube-api-access-tgxcb\") pod \"telemetry-operator-controller-manager-85cd9769bb-fkbbc\" (UID: \"39341414-eb82-400d-96ce-e546dd32d15b\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fkbbc" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.524233 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mj9l\" (UniqueName: \"kubernetes.io/projected/7732c732-60c5-476d-bf01-ed83c38b4d35-kube-api-access-2mj9l\") pod \"test-operator-controller-manager-69797bbcbd-8whbf\" (UID: \"7732c732-60c5-476d-bf01-ed83c38b4d35\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8whbf" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.524273 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/43d55b3f-3bd8-4083-9e0d-f398938a47e6-cert\") pod \"infra-operator-controller-manager-694cf4f878-xdrln\" (UID: \"43d55b3f-3bd8-4083-9e0d-f398938a47e6\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-xdrln" Jan 27 13:24:31 crc kubenswrapper[4786]: E0127 13:24:31.524644 4786 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 13:24:31 crc kubenswrapper[4786]: E0127 13:24:31.524859 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43d55b3f-3bd8-4083-9e0d-f398938a47e6-cert podName:43d55b3f-3bd8-4083-9e0d-f398938a47e6 nodeName:}" failed. No retries permitted until 2026-01-27 13:24:32.524841646 +0000 UTC m=+1055.735455765 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/43d55b3f-3bd8-4083-9e0d-f398938a47e6-cert") pod "infra-operator-controller-manager-694cf4f878-xdrln" (UID: "43d55b3f-3bd8-4083-9e0d-f398938a47e6") : secret "infra-operator-webhook-server-cert" not found Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.531379 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cv96v" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.549200 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-ghr9t" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.555035 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-q7glc"] Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.555983 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-q7glc" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.560445 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-t4cf8" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.571948 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-q7glc"] Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.580360 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgxcb\" (UniqueName: \"kubernetes.io/projected/39341414-eb82-400d-96ce-e546dd32d15b-kube-api-access-tgxcb\") pod \"telemetry-operator-controller-manager-85cd9769bb-fkbbc\" (UID: \"39341414-eb82-400d-96ce-e546dd32d15b\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fkbbc" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.624922 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-qmp6d" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.625778 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnddg\" (UniqueName: \"kubernetes.io/projected/f7bba046-60b2-4fa4-96ec-976f73b1ff7c-kube-api-access-wnddg\") pod \"watcher-operator-controller-manager-564965969-q7glc\" (UID: \"f7bba046-60b2-4fa4-96ec-976f73b1ff7c\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-q7glc" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.625846 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mj9l\" (UniqueName: \"kubernetes.io/projected/7732c732-60c5-476d-bf01-ed83c38b4d35-kube-api-access-2mj9l\") pod \"test-operator-controller-manager-69797bbcbd-8whbf\" (UID: \"7732c732-60c5-476d-bf01-ed83c38b4d35\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8whbf" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.669192 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fkbbc" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.672430 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mj9l\" (UniqueName: \"kubernetes.io/projected/7732c732-60c5-476d-bf01-ed83c38b4d35-kube-api-access-2mj9l\") pod \"test-operator-controller-manager-69797bbcbd-8whbf\" (UID: \"7732c732-60c5-476d-bf01-ed83c38b4d35\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8whbf" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.710875 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-596b94879d-2gz6v"] Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.711844 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-596b94879d-2gz6v" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.715084 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.715189 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.715367 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-w66v8" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.729396 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnddg\" (UniqueName: \"kubernetes.io/projected/f7bba046-60b2-4fa4-96ec-976f73b1ff7c-kube-api-access-wnddg\") pod \"watcher-operator-controller-manager-564965969-q7glc\" (UID: \"f7bba046-60b2-4fa4-96ec-976f73b1ff7c\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-q7glc" Jan 27 
13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.747069 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-596b94879d-2gz6v"] Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.765528 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnddg\" (UniqueName: \"kubernetes.io/projected/f7bba046-60b2-4fa4-96ec-976f73b1ff7c-kube-api-access-wnddg\") pod \"watcher-operator-controller-manager-564965969-q7glc\" (UID: \"f7bba046-60b2-4fa4-96ec-976f73b1ff7c\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-q7glc" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.777427 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8whbf" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.790597 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-km9zd"] Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.792285 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-km9zd" Jan 27 13:24:31 crc kubenswrapper[4786]: W0127 13:24:31.792627 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode607a576_31c5_4ef7_82ea_66851b5a33d2.slice/crio-fd2e27b7f911bfe39624e10912a4990b2439b0a196d90cc6ec44ab3c02e55736 WatchSource:0}: Error finding container fd2e27b7f911bfe39624e10912a4990b2439b0a196d90cc6ec44ab3c02e55736: Status 404 returned error can't find the container with id fd2e27b7f911bfe39624e10912a4990b2439b0a196d90cc6ec44ab3c02e55736 Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.794235 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-jqdl6" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.803308 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-km9zd"] Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.814266 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-cpn89"] Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.830238 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z56dg\" (UniqueName: \"kubernetes.io/projected/d7f033ce-43f9-425f-a74c-65735b66f5b8-kube-api-access-z56dg\") pod \"openstack-operator-controller-manager-596b94879d-2gz6v\" (UID: \"d7f033ce-43f9-425f-a74c-65735b66f5b8\") " pod="openstack-operators/openstack-operator-controller-manager-596b94879d-2gz6v" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.830289 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/d7f033ce-43f9-425f-a74c-65735b66f5b8-webhook-certs\") pod \"openstack-operator-controller-manager-596b94879d-2gz6v\" (UID: \"d7f033ce-43f9-425f-a74c-65735b66f5b8\") " pod="openstack-operators/openstack-operator-controller-manager-596b94879d-2gz6v" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.830308 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d7f033ce-43f9-425f-a74c-65735b66f5b8-metrics-certs\") pod \"openstack-operator-controller-manager-596b94879d-2gz6v\" (UID: \"d7f033ce-43f9-425f-a74c-65735b66f5b8\") " pod="openstack-operators/openstack-operator-controller-manager-596b94879d-2gz6v" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.830401 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnv2h\" (UniqueName: \"kubernetes.io/projected/fd3a1177-720b-4e0d-83d9-9ea046369690-kube-api-access-dnv2h\") pod \"rabbitmq-cluster-operator-manager-668c99d594-km9zd\" (UID: \"fd3a1177-720b-4e0d-83d9-9ea046369690\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-km9zd" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.833478 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-5mpdp"] Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.850676 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-z5xnb"] Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.859531 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-fwlxc"] Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.871541 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-z5xnb" 
event={"ID":"4a240cd4-c49f-4716-80bb-6d1ba632a32c","Type":"ContainerStarted","Data":"ae5f18ab097efeb9eab3354c863afb6f5e1153278e02c43fb32e5f9ca09f21b3"} Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.883482 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fwlxc" event={"ID":"e607a576-31c5-4ef7-82ea-66851b5a33d2","Type":"ContainerStarted","Data":"fd2e27b7f911bfe39624e10912a4990b2439b0a196d90cc6ec44ab3c02e55736"} Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.885454 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-w6r6x"] Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.887647 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5mpdp" event={"ID":"a5ebc5e9-fc27-4326-927a-c791f35a71e9","Type":"ContainerStarted","Data":"e32aaadf1e8a5975e7624480fbf503c190c2f0ba22df574ba5f2ceb88b435ff3"} Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.889197 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-cpn89" event={"ID":"37986633-7f55-41aa-b83d-7f74a5640f2f","Type":"ContainerStarted","Data":"0ba690b4e68c2b73588dd5329bcc016f89d6cc65c452bb5e414a2177302040d0"} Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.931584 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnv2h\" (UniqueName: \"kubernetes.io/projected/fd3a1177-720b-4e0d-83d9-9ea046369690-kube-api-access-dnv2h\") pod \"rabbitmq-cluster-operator-manager-668c99d594-km9zd\" (UID: \"fd3a1177-720b-4e0d-83d9-9ea046369690\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-km9zd" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.931658 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/45599804-69cd-44f0-bb76-a15e5a3ff700-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8546q8kt\" (UID: \"45599804-69cd-44f0-bb76-a15e5a3ff700\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8546q8kt" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.931752 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z56dg\" (UniqueName: \"kubernetes.io/projected/d7f033ce-43f9-425f-a74c-65735b66f5b8-kube-api-access-z56dg\") pod \"openstack-operator-controller-manager-596b94879d-2gz6v\" (UID: \"d7f033ce-43f9-425f-a74c-65735b66f5b8\") " pod="openstack-operators/openstack-operator-controller-manager-596b94879d-2gz6v" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.931795 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d7f033ce-43f9-425f-a74c-65735b66f5b8-webhook-certs\") pod \"openstack-operator-controller-manager-596b94879d-2gz6v\" (UID: \"d7f033ce-43f9-425f-a74c-65735b66f5b8\") " pod="openstack-operators/openstack-operator-controller-manager-596b94879d-2gz6v" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.931818 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d7f033ce-43f9-425f-a74c-65735b66f5b8-metrics-certs\") pod \"openstack-operator-controller-manager-596b94879d-2gz6v\" (UID: \"d7f033ce-43f9-425f-a74c-65735b66f5b8\") " pod="openstack-operators/openstack-operator-controller-manager-596b94879d-2gz6v" Jan 27 13:24:31 crc kubenswrapper[4786]: E0127 13:24:31.931979 4786 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 13:24:31 crc kubenswrapper[4786]: E0127 13:24:31.932030 4786 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/d7f033ce-43f9-425f-a74c-65735b66f5b8-metrics-certs podName:d7f033ce-43f9-425f-a74c-65735b66f5b8 nodeName:}" failed. No retries permitted until 2026-01-27 13:24:32.432012559 +0000 UTC m=+1055.642626678 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d7f033ce-43f9-425f-a74c-65735b66f5b8-metrics-certs") pod "openstack-operator-controller-manager-596b94879d-2gz6v" (UID: "d7f033ce-43f9-425f-a74c-65735b66f5b8") : secret "metrics-server-cert" not found Jan 27 13:24:31 crc kubenswrapper[4786]: E0127 13:24:31.932139 4786 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 13:24:31 crc kubenswrapper[4786]: E0127 13:24:31.932169 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45599804-69cd-44f0-bb76-a15e5a3ff700-cert podName:45599804-69cd-44f0-bb76-a15e5a3ff700 nodeName:}" failed. No retries permitted until 2026-01-27 13:24:32.932159703 +0000 UTC m=+1056.142773822 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/45599804-69cd-44f0-bb76-a15e5a3ff700-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8546q8kt" (UID: "45599804-69cd-44f0-bb76-a15e5a3ff700") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 13:24:31 crc kubenswrapper[4786]: E0127 13:24:31.932377 4786 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 13:24:31 crc kubenswrapper[4786]: E0127 13:24:31.932408 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7f033ce-43f9-425f-a74c-65735b66f5b8-webhook-certs podName:d7f033ce-43f9-425f-a74c-65735b66f5b8 nodeName:}" failed. 
No retries permitted until 2026-01-27 13:24:32.432399171 +0000 UTC m=+1055.643013290 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d7f033ce-43f9-425f-a74c-65735b66f5b8-webhook-certs") pod "openstack-operator-controller-manager-596b94879d-2gz6v" (UID: "d7f033ce-43f9-425f-a74c-65735b66f5b8") : secret "webhook-server-cert" not found Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.949177 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-q7glc" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.951032 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z56dg\" (UniqueName: \"kubernetes.io/projected/d7f033ce-43f9-425f-a74c-65735b66f5b8-kube-api-access-z56dg\") pod \"openstack-operator-controller-manager-596b94879d-2gz6v\" (UID: \"d7f033ce-43f9-425f-a74c-65735b66f5b8\") " pod="openstack-operators/openstack-operator-controller-manager-596b94879d-2gz6v" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.953136 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnv2h\" (UniqueName: \"kubernetes.io/projected/fd3a1177-720b-4e0d-83d9-9ea046369690-kube-api-access-dnv2h\") pod \"rabbitmq-cluster-operator-manager-668c99d594-km9zd\" (UID: \"fd3a1177-720b-4e0d-83d9-9ea046369690\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-km9zd" Jan 27 13:24:31 crc kubenswrapper[4786]: I0127 13:24:31.956908 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-km9zd" Jan 27 13:24:32 crc kubenswrapper[4786]: I0127 13:24:32.001385 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wnmcp"] Jan 27 13:24:32 crc kubenswrapper[4786]: I0127 13:24:32.029281 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-lspl5"] Jan 27 13:24:32 crc kubenswrapper[4786]: W0127 13:24:32.058081 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcfc6eb47_18a2_442a_a1d8_ddec61462156.slice/crio-cab6f6ceff2c7f06e8af00ff4620ba2c758c4611952a3a87ef19c8111feac62b WatchSource:0}: Error finding container cab6f6ceff2c7f06e8af00ff4620ba2c758c4611952a3a87ef19c8111feac62b: Status 404 returned error can't find the container with id cab6f6ceff2c7f06e8af00ff4620ba2c758c4611952a3a87ef19c8111feac62b Jan 27 13:24:32 crc kubenswrapper[4786]: I0127 13:24:32.158754 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-6p8s9"] Jan 27 13:24:32 crc kubenswrapper[4786]: I0127 13:24:32.371729 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-m75zn"] Jan 27 13:24:32 crc kubenswrapper[4786]: I0127 13:24:32.383512 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-pb5lk"] Jan 27 13:24:32 crc kubenswrapper[4786]: I0127 13:24:32.417493 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-c2c8q"] Jan 27 13:24:32 crc kubenswrapper[4786]: I0127 13:24:32.439139 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" 
(UniqueName: \"kubernetes.io/secret/d7f033ce-43f9-425f-a74c-65735b66f5b8-webhook-certs\") pod \"openstack-operator-controller-manager-596b94879d-2gz6v\" (UID: \"d7f033ce-43f9-425f-a74c-65735b66f5b8\") " pod="openstack-operators/openstack-operator-controller-manager-596b94879d-2gz6v" Jan 27 13:24:32 crc kubenswrapper[4786]: I0127 13:24:32.439200 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d7f033ce-43f9-425f-a74c-65735b66f5b8-metrics-certs\") pod \"openstack-operator-controller-manager-596b94879d-2gz6v\" (UID: \"d7f033ce-43f9-425f-a74c-65735b66f5b8\") " pod="openstack-operators/openstack-operator-controller-manager-596b94879d-2gz6v" Jan 27 13:24:32 crc kubenswrapper[4786]: E0127 13:24:32.439373 4786 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 13:24:32 crc kubenswrapper[4786]: E0127 13:24:32.439417 4786 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 13:24:32 crc kubenswrapper[4786]: E0127 13:24:32.439486 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7f033ce-43f9-425f-a74c-65735b66f5b8-webhook-certs podName:d7f033ce-43f9-425f-a74c-65735b66f5b8 nodeName:}" failed. No retries permitted until 2026-01-27 13:24:33.439462113 +0000 UTC m=+1056.650076272 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d7f033ce-43f9-425f-a74c-65735b66f5b8-webhook-certs") pod "openstack-operator-controller-manager-596b94879d-2gz6v" (UID: "d7f033ce-43f9-425f-a74c-65735b66f5b8") : secret "webhook-server-cert" not found Jan 27 13:24:32 crc kubenswrapper[4786]: E0127 13:24:32.439508 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7f033ce-43f9-425f-a74c-65735b66f5b8-metrics-certs podName:d7f033ce-43f9-425f-a74c-65735b66f5b8 nodeName:}" failed. No retries permitted until 2026-01-27 13:24:33.439499554 +0000 UTC m=+1056.650113753 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d7f033ce-43f9-425f-a74c-65735b66f5b8-metrics-certs") pod "openstack-operator-controller-manager-596b94879d-2gz6v" (UID: "d7f033ce-43f9-425f-a74c-65735b66f5b8") : secret "metrics-server-cert" not found Jan 27 13:24:32 crc kubenswrapper[4786]: I0127 13:24:32.540623 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/43d55b3f-3bd8-4083-9e0d-f398938a47e6-cert\") pod \"infra-operator-controller-manager-694cf4f878-xdrln\" (UID: \"43d55b3f-3bd8-4083-9e0d-f398938a47e6\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-xdrln" Jan 27 13:24:32 crc kubenswrapper[4786]: E0127 13:24:32.540742 4786 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 13:24:32 crc kubenswrapper[4786]: E0127 13:24:32.540800 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43d55b3f-3bd8-4083-9e0d-f398938a47e6-cert podName:43d55b3f-3bd8-4083-9e0d-f398938a47e6 nodeName:}" failed. No retries permitted until 2026-01-27 13:24:34.540784671 +0000 UTC m=+1057.751398790 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/43d55b3f-3bd8-4083-9e0d-f398938a47e6-cert") pod "infra-operator-controller-manager-694cf4f878-xdrln" (UID: "43d55b3f-3bd8-4083-9e0d-f398938a47e6") : secret "infra-operator-webhook-server-cert" not found Jan 27 13:24:32 crc kubenswrapper[4786]: I0127 13:24:32.558678 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-5zb2z"] Jan 27 13:24:32 crc kubenswrapper[4786]: I0127 13:24:32.567231 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-8whbf"] Jan 27 13:24:32 crc kubenswrapper[4786]: W0127 13:24:32.572951 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7732c732_60c5_476d_bf01_ed83c38b4d35.slice/crio-28398f239cc9e8e32e8f27237097bea5f863afdcb6d3cedc94f06753183d757d WatchSource:0}: Error finding container 28398f239cc9e8e32e8f27237097bea5f863afdcb6d3cedc94f06753183d757d: Status 404 returned error can't find the container with id 28398f239cc9e8e32e8f27237097bea5f863afdcb6d3cedc94f06753183d757d Jan 27 13:24:32 crc kubenswrapper[4786]: I0127 13:24:32.574987 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-ghr9t"] Jan 27 13:24:32 crc kubenswrapper[4786]: I0127 13:24:32.581258 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6cffd64fd8-cgnm5"] Jan 27 13:24:32 crc kubenswrapper[4786]: W0127 13:24:32.588081 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d8c2e75_6d30_4c9f_9f3f_82bda48f6d43.slice/crio-b7fbe6e64c168fb73cdff58a4c58439c1688d928816252643b3db07ef4ebb6f5 WatchSource:0}: Error finding container 
b7fbe6e64c168fb73cdff58a4c58439c1688d928816252643b3db07ef4ebb6f5: Status 404 returned error can't find the container with id b7fbe6e64c168fb73cdff58a4c58439c1688d928816252643b3db07ef4ebb6f5 Jan 27 13:24:32 crc kubenswrapper[4786]: W0127 13:24:32.588862 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88c4fa6a_bb1a_46fe_a863_473b9ec66ce7.slice/crio-1eacffd5fa26fd2df0a21e1c4d10b89db4f582bc639ef07407e03436bcb131cb WatchSource:0}: Error finding container 1eacffd5fa26fd2df0a21e1c4d10b89db4f582bc639ef07407e03436bcb131cb: Status 404 returned error can't find the container with id 1eacffd5fa26fd2df0a21e1c4d10b89db4f582bc639ef07407e03436bcb131cb Jan 27 13:24:32 crc kubenswrapper[4786]: I0127 13:24:32.588874 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fkbbc"] Jan 27 13:24:32 crc kubenswrapper[4786]: W0127 13:24:32.608285 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod250d0fff_09d2_49be_94a3_6eefdd3aab06.slice/crio-d3ed35dea0e0b58aa84a877b3b669f29d3bba42b682299dbf4f72521bec0802b WatchSource:0}: Error finding container d3ed35dea0e0b58aa84a877b3b669f29d3bba42b682299dbf4f72521bec0802b: Status 404 returned error can't find the container with id d3ed35dea0e0b58aa84a877b3b669f29d3bba42b682299dbf4f72521bec0802b Jan 27 13:24:32 crc kubenswrapper[4786]: W0127 13:24:32.613224 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39341414_eb82_400d_96ce_e546dd32d15b.slice/crio-e97a71e708e3861d9601554f34cfd53f55fea0cf554a570ab7f2a23b4665c936 WatchSource:0}: Error finding container e97a71e708e3861d9601554f34cfd53f55fea0cf554a570ab7f2a23b4665c936: Status 404 returned error can't find the container with id 
e97a71e708e3861d9601554f34cfd53f55fea0cf554a570ab7f2a23b4665c936 Jan 27 13:24:32 crc kubenswrapper[4786]: E0127 13:24:32.616530 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tgxcb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-fkbbc_openstack-operators(39341414-eb82-400d-96ce-e546dd32d15b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 13:24:32 crc kubenswrapper[4786]: E0127 13:24:32.617680 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fkbbc" podUID="39341414-eb82-400d-96ce-e546dd32d15b" Jan 27 13:24:32 crc kubenswrapper[4786]: W0127 13:24:32.729577 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd3a1177_720b_4e0d_83d9_9ea046369690.slice/crio-33a130274d359fe74d496327baf5122d57cea288426e3d924e47879018bfc211 WatchSource:0}: Error finding container 33a130274d359fe74d496327baf5122d57cea288426e3d924e47879018bfc211: Status 404 returned error can't find the container with id 33a130274d359fe74d496327baf5122d57cea288426e3d924e47879018bfc211 Jan 27 13:24:32 crc kubenswrapper[4786]: I0127 13:24:32.756146 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-cv96v"] Jan 27 13:24:32 crc kubenswrapper[4786]: I0127 13:24:32.763129 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-km9zd"] Jan 27 13:24:32 crc kubenswrapper[4786]: E0127 13:24:32.763954 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wnddg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-q7glc_openstack-operators(f7bba046-60b2-4fa4-96ec-976f73b1ff7c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 13:24:32 crc kubenswrapper[4786]: I0127 13:24:32.764091 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-q7glc"] Jan 27 13:24:32 crc kubenswrapper[4786]: W0127 13:24:32.768594 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42803b12_da36_48df_b9bb_ed3d4555b7b4.slice/crio-84bee6625a2e69645be448096a516b8a2ee6d56f2e42f38bbaf73b6bc87ef21e WatchSource:0}: Error finding container 84bee6625a2e69645be448096a516b8a2ee6d56f2e42f38bbaf73b6bc87ef21e: Status 404 returned error can't find the container with id 84bee6625a2e69645be448096a516b8a2ee6d56f2e42f38bbaf73b6bc87ef21e Jan 27 13:24:32 crc kubenswrapper[4786]: E0127 13:24:32.769291 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-q7glc" 
podUID="f7bba046-60b2-4fa4-96ec-976f73b1ff7c" Jan 27 13:24:32 crc kubenswrapper[4786]: I0127 13:24:32.769564 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-qmp6d"] Jan 27 13:24:32 crc kubenswrapper[4786]: E0127 13:24:32.775870 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jtxdw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-6f75f45d54-cv96v_openstack-operators(42803b12-da36-48df-b9bb-ed3d4555b7b4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 13:24:32 crc kubenswrapper[4786]: E0127 13:24:32.777546 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cv96v" podUID="42803b12-da36-48df-b9bb-ed3d4555b7b4" Jan 27 13:24:32 crc kubenswrapper[4786]: W0127 13:24:32.798773 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d2d7f2c_4522_45bf_a12d_1eb7cc11041e.slice/crio-a6322c1e041969e0972811abbcba4daa16e31f1954faa9cd74ac8c419185ec00 WatchSource:0}: Error finding container a6322c1e041969e0972811abbcba4daa16e31f1954faa9cd74ac8c419185ec00: Status 404 returned error can't find the container with id a6322c1e041969e0972811abbcba4daa16e31f1954faa9cd74ac8c419185ec00 Jan 27 13:24:32 crc kubenswrapper[4786]: E0127 13:24:32.808937 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6k5xj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-qmp6d_openstack-operators(9d2d7f2c-4522-45bf-a12d-1eb7cc11041e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 13:24:32 crc kubenswrapper[4786]: E0127 13:24:32.810133 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-qmp6d" podUID="9d2d7f2c-4522-45bf-a12d-1eb7cc11041e" Jan 27 13:24:32 crc kubenswrapper[4786]: I0127 13:24:32.898359 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fkbbc" event={"ID":"39341414-eb82-400d-96ce-e546dd32d15b","Type":"ContainerStarted","Data":"e97a71e708e3861d9601554f34cfd53f55fea0cf554a570ab7f2a23b4665c936"} Jan 27 13:24:32 crc kubenswrapper[4786]: E0127 13:24:32.900301 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fkbbc" podUID="39341414-eb82-400d-96ce-e546dd32d15b" Jan 27 13:24:32 crc kubenswrapper[4786]: I0127 13:24:32.901484 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wnmcp" event={"ID":"7e2710fc-7453-40a9-81c0-ccec15d86a77","Type":"ContainerStarted","Data":"4870618a25c6835ec5c045068115251144f6bb634ac019bf5941519483451bc5"} Jan 27 13:24:32 crc kubenswrapper[4786]: I0127 13:24:32.903473 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6cffd64fd8-cgnm5" event={"ID":"1d8c2e75-6d30-4c9f-9f3f-82bda48f6d43","Type":"ContainerStarted","Data":"b7fbe6e64c168fb73cdff58a4c58439c1688d928816252643b3db07ef4ebb6f5"} Jan 27 13:24:32 crc kubenswrapper[4786]: I0127 13:24:32.905077 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-qmp6d" event={"ID":"9d2d7f2c-4522-45bf-a12d-1eb7cc11041e","Type":"ContainerStarted","Data":"a6322c1e041969e0972811abbcba4daa16e31f1954faa9cd74ac8c419185ec00"} Jan 27 13:24:32 crc kubenswrapper[4786]: E0127 13:24:32.906319 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-qmp6d" podUID="9d2d7f2c-4522-45bf-a12d-1eb7cc11041e" Jan 27 13:24:32 crc kubenswrapper[4786]: I0127 13:24:32.907302 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8whbf" 
event={"ID":"7732c732-60c5-476d-bf01-ed83c38b4d35","Type":"ContainerStarted","Data":"28398f239cc9e8e32e8f27237097bea5f863afdcb6d3cedc94f06753183d757d"} Jan 27 13:24:32 crc kubenswrapper[4786]: I0127 13:24:32.909831 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-pb5lk" event={"ID":"5c18c2c6-04e6-4b87-b92d-586823b20ac1","Type":"ContainerStarted","Data":"782e4270cce223cf8da2bf7944fe8751d6310d238b9e147d35b34afbd1045059"} Jan 27 13:24:32 crc kubenswrapper[4786]: I0127 13:24:32.916550 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-km9zd" event={"ID":"fd3a1177-720b-4e0d-83d9-9ea046369690","Type":"ContainerStarted","Data":"33a130274d359fe74d496327baf5122d57cea288426e3d924e47879018bfc211"} Jan 27 13:24:32 crc kubenswrapper[4786]: I0127 13:24:32.936207 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-w6r6x" event={"ID":"84881c35-4d84-4c91-b401-0bf1d7de9314","Type":"ContainerStarted","Data":"f4b4cde78be7675b4c33b2d231fc177740f812e93112b17d1e0e1a8f1ebd1713"} Jan 27 13:24:32 crc kubenswrapper[4786]: I0127 13:24:32.938546 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-m75zn" event={"ID":"1eed4ae8-357f-4388-a9d3-9382b0fc84ec","Type":"ContainerStarted","Data":"8a77856ffe78d0ca104385d1c0c798da63bcf6d4a13633949f6e8a1c5905b418"} Jan 27 13:24:32 crc kubenswrapper[4786]: I0127 13:24:32.940942 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-6p8s9" event={"ID":"5614e239-8fc3-4091-aad4-55a217ca1092","Type":"ContainerStarted","Data":"37d1f00ab8c2f38ee1fa7c1e6d62c9ae60678d7d6e61ec6dfce4d390ef8ec2e0"} Jan 27 13:24:32 crc kubenswrapper[4786]: I0127 13:24:32.942964 4786 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-ghr9t" event={"ID":"250d0fff-09d2-49be-94a3-6eefdd3aab06","Type":"ContainerStarted","Data":"d3ed35dea0e0b58aa84a877b3b669f29d3bba42b682299dbf4f72521bec0802b"} Jan 27 13:24:32 crc kubenswrapper[4786]: I0127 13:24:32.944738 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-q7glc" event={"ID":"f7bba046-60b2-4fa4-96ec-976f73b1ff7c","Type":"ContainerStarted","Data":"890f3c767362583b3d58f529abbeca514a57cc785e709dc102e7f13615c12d49"} Jan 27 13:24:32 crc kubenswrapper[4786]: I0127 13:24:32.945956 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-5zb2z" event={"ID":"88c4fa6a-bb1a-46fe-a863-473b9ec66ce7","Type":"ContainerStarted","Data":"1eacffd5fa26fd2df0a21e1c4d10b89db4f582bc639ef07407e03436bcb131cb"} Jan 27 13:24:32 crc kubenswrapper[4786]: E0127 13:24:32.947491 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-q7glc" podUID="f7bba046-60b2-4fa4-96ec-976f73b1ff7c" Jan 27 13:24:32 crc kubenswrapper[4786]: I0127 13:24:32.955490 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cv96v" event={"ID":"42803b12-da36-48df-b9bb-ed3d4555b7b4","Type":"ContainerStarted","Data":"84bee6625a2e69645be448096a516b8a2ee6d56f2e42f38bbaf73b6bc87ef21e"} Jan 27 13:24:32 crc kubenswrapper[4786]: I0127 13:24:32.958755 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-lspl5" 
event={"ID":"cfc6eb47-18a2-442a-a1d8-ddec61462156","Type":"ContainerStarted","Data":"cab6f6ceff2c7f06e8af00ff4620ba2c758c4611952a3a87ef19c8111feac62b"} Jan 27 13:24:32 crc kubenswrapper[4786]: E0127 13:24:32.959307 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cv96v" podUID="42803b12-da36-48df-b9bb-ed3d4555b7b4" Jan 27 13:24:32 crc kubenswrapper[4786]: I0127 13:24:32.961153 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/45599804-69cd-44f0-bb76-a15e5a3ff700-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8546q8kt\" (UID: \"45599804-69cd-44f0-bb76-a15e5a3ff700\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8546q8kt" Jan 27 13:24:32 crc kubenswrapper[4786]: E0127 13:24:32.961701 4786 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 13:24:32 crc kubenswrapper[4786]: E0127 13:24:32.961877 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45599804-69cd-44f0-bb76-a15e5a3ff700-cert podName:45599804-69cd-44f0-bb76-a15e5a3ff700 nodeName:}" failed. No retries permitted until 2026-01-27 13:24:34.961853946 +0000 UTC m=+1058.172468075 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/45599804-69cd-44f0-bb76-a15e5a3ff700-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8546q8kt" (UID: "45599804-69cd-44f0-bb76-a15e5a3ff700") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 13:24:32 crc kubenswrapper[4786]: I0127 13:24:32.967771 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-c2c8q" event={"ID":"4058f919-d5c7-4f73-9c8a-432409f9022a","Type":"ContainerStarted","Data":"a7dc7ec848a6ade4e4913186226dda6c76d3b6a022cffdf12ea98c1a910ea1b2"} Jan 27 13:24:33 crc kubenswrapper[4786]: I0127 13:24:33.467496 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d7f033ce-43f9-425f-a74c-65735b66f5b8-webhook-certs\") pod \"openstack-operator-controller-manager-596b94879d-2gz6v\" (UID: \"d7f033ce-43f9-425f-a74c-65735b66f5b8\") " pod="openstack-operators/openstack-operator-controller-manager-596b94879d-2gz6v" Jan 27 13:24:33 crc kubenswrapper[4786]: I0127 13:24:33.467643 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d7f033ce-43f9-425f-a74c-65735b66f5b8-metrics-certs\") pod \"openstack-operator-controller-manager-596b94879d-2gz6v\" (UID: \"d7f033ce-43f9-425f-a74c-65735b66f5b8\") " pod="openstack-operators/openstack-operator-controller-manager-596b94879d-2gz6v" Jan 27 13:24:33 crc kubenswrapper[4786]: E0127 13:24:33.467890 4786 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 13:24:33 crc kubenswrapper[4786]: E0127 13:24:33.467953 4786 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 13:24:33 crc kubenswrapper[4786]: E0127 13:24:33.467981 4786 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7f033ce-43f9-425f-a74c-65735b66f5b8-webhook-certs podName:d7f033ce-43f9-425f-a74c-65735b66f5b8 nodeName:}" failed. No retries permitted until 2026-01-27 13:24:35.467964042 +0000 UTC m=+1058.678578151 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d7f033ce-43f9-425f-a74c-65735b66f5b8-webhook-certs") pod "openstack-operator-controller-manager-596b94879d-2gz6v" (UID: "d7f033ce-43f9-425f-a74c-65735b66f5b8") : secret "webhook-server-cert" not found Jan 27 13:24:33 crc kubenswrapper[4786]: E0127 13:24:33.468019 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7f033ce-43f9-425f-a74c-65735b66f5b8-metrics-certs podName:d7f033ce-43f9-425f-a74c-65735b66f5b8 nodeName:}" failed. No retries permitted until 2026-01-27 13:24:35.468001683 +0000 UTC m=+1058.678615882 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d7f033ce-43f9-425f-a74c-65735b66f5b8-metrics-certs") pod "openstack-operator-controller-manager-596b94879d-2gz6v" (UID: "d7f033ce-43f9-425f-a74c-65735b66f5b8") : secret "metrics-server-cert" not found Jan 27 13:24:33 crc kubenswrapper[4786]: E0127 13:24:33.986665 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-qmp6d" podUID="9d2d7f2c-4522-45bf-a12d-1eb7cc11041e" Jan 27 13:24:33 crc kubenswrapper[4786]: E0127 13:24:33.987720 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-q7glc" podUID="f7bba046-60b2-4fa4-96ec-976f73b1ff7c" Jan 27 13:24:33 crc kubenswrapper[4786]: E0127 13:24:33.992866 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fkbbc" podUID="39341414-eb82-400d-96ce-e546dd32d15b" Jan 27 13:24:33 crc kubenswrapper[4786]: E0127 13:24:33.992949 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cv96v" podUID="42803b12-da36-48df-b9bb-ed3d4555b7b4" Jan 27 13:24:34 crc kubenswrapper[4786]: I0127 13:24:34.586873 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/43d55b3f-3bd8-4083-9e0d-f398938a47e6-cert\") pod \"infra-operator-controller-manager-694cf4f878-xdrln\" (UID: \"43d55b3f-3bd8-4083-9e0d-f398938a47e6\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-xdrln" Jan 27 13:24:34 crc kubenswrapper[4786]: E0127 13:24:34.587516 4786 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 13:24:34 crc kubenswrapper[4786]: E0127 13:24:34.587617 4786 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/43d55b3f-3bd8-4083-9e0d-f398938a47e6-cert podName:43d55b3f-3bd8-4083-9e0d-f398938a47e6 nodeName:}" failed. No retries permitted until 2026-01-27 13:24:38.587581678 +0000 UTC m=+1061.798195837 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/43d55b3f-3bd8-4083-9e0d-f398938a47e6-cert") pod "infra-operator-controller-manager-694cf4f878-xdrln" (UID: "43d55b3f-3bd8-4083-9e0d-f398938a47e6") : secret "infra-operator-webhook-server-cert" not found Jan 27 13:24:34 crc kubenswrapper[4786]: I0127 13:24:34.993004 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/45599804-69cd-44f0-bb76-a15e5a3ff700-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8546q8kt\" (UID: \"45599804-69cd-44f0-bb76-a15e5a3ff700\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8546q8kt" Jan 27 13:24:34 crc kubenswrapper[4786]: E0127 13:24:34.993293 4786 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 13:24:34 crc kubenswrapper[4786]: E0127 13:24:34.993581 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45599804-69cd-44f0-bb76-a15e5a3ff700-cert podName:45599804-69cd-44f0-bb76-a15e5a3ff700 nodeName:}" failed. No retries permitted until 2026-01-27 13:24:38.993553939 +0000 UTC m=+1062.204168058 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/45599804-69cd-44f0-bb76-a15e5a3ff700-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8546q8kt" (UID: "45599804-69cd-44f0-bb76-a15e5a3ff700") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 13:24:35 crc kubenswrapper[4786]: I0127 13:24:35.500307 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d7f033ce-43f9-425f-a74c-65735b66f5b8-webhook-certs\") pod \"openstack-operator-controller-manager-596b94879d-2gz6v\" (UID: \"d7f033ce-43f9-425f-a74c-65735b66f5b8\") " pod="openstack-operators/openstack-operator-controller-manager-596b94879d-2gz6v" Jan 27 13:24:35 crc kubenswrapper[4786]: I0127 13:24:35.500350 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d7f033ce-43f9-425f-a74c-65735b66f5b8-metrics-certs\") pod \"openstack-operator-controller-manager-596b94879d-2gz6v\" (UID: \"d7f033ce-43f9-425f-a74c-65735b66f5b8\") " pod="openstack-operators/openstack-operator-controller-manager-596b94879d-2gz6v" Jan 27 13:24:35 crc kubenswrapper[4786]: E0127 13:24:35.500470 4786 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 13:24:35 crc kubenswrapper[4786]: E0127 13:24:35.500494 4786 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 13:24:35 crc kubenswrapper[4786]: E0127 13:24:35.500544 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7f033ce-43f9-425f-a74c-65735b66f5b8-webhook-certs podName:d7f033ce-43f9-425f-a74c-65735b66f5b8 nodeName:}" failed. No retries permitted until 2026-01-27 13:24:39.500526278 +0000 UTC m=+1062.711140397 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d7f033ce-43f9-425f-a74c-65735b66f5b8-webhook-certs") pod "openstack-operator-controller-manager-596b94879d-2gz6v" (UID: "d7f033ce-43f9-425f-a74c-65735b66f5b8") : secret "webhook-server-cert" not found Jan 27 13:24:35 crc kubenswrapper[4786]: E0127 13:24:35.500561 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7f033ce-43f9-425f-a74c-65735b66f5b8-metrics-certs podName:d7f033ce-43f9-425f-a74c-65735b66f5b8 nodeName:}" failed. No retries permitted until 2026-01-27 13:24:39.500554289 +0000 UTC m=+1062.711168408 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d7f033ce-43f9-425f-a74c-65735b66f5b8-metrics-certs") pod "openstack-operator-controller-manager-596b94879d-2gz6v" (UID: "d7f033ce-43f9-425f-a74c-65735b66f5b8") : secret "metrics-server-cert" not found Jan 27 13:24:38 crc kubenswrapper[4786]: I0127 13:24:38.645180 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/43d55b3f-3bd8-4083-9e0d-f398938a47e6-cert\") pod \"infra-operator-controller-manager-694cf4f878-xdrln\" (UID: \"43d55b3f-3bd8-4083-9e0d-f398938a47e6\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-xdrln" Jan 27 13:24:38 crc kubenswrapper[4786]: E0127 13:24:38.645738 4786 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 13:24:38 crc kubenswrapper[4786]: E0127 13:24:38.645796 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43d55b3f-3bd8-4083-9e0d-f398938a47e6-cert podName:43d55b3f-3bd8-4083-9e0d-f398938a47e6 nodeName:}" failed. No retries permitted until 2026-01-27 13:24:46.645777381 +0000 UTC m=+1069.856391500 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/43d55b3f-3bd8-4083-9e0d-f398938a47e6-cert") pod "infra-operator-controller-manager-694cf4f878-xdrln" (UID: "43d55b3f-3bd8-4083-9e0d-f398938a47e6") : secret "infra-operator-webhook-server-cert" not found Jan 27 13:24:39 crc kubenswrapper[4786]: I0127 13:24:39.051279 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/45599804-69cd-44f0-bb76-a15e5a3ff700-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8546q8kt\" (UID: \"45599804-69cd-44f0-bb76-a15e5a3ff700\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8546q8kt" Jan 27 13:24:39 crc kubenswrapper[4786]: E0127 13:24:39.051399 4786 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 13:24:39 crc kubenswrapper[4786]: E0127 13:24:39.051461 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45599804-69cd-44f0-bb76-a15e5a3ff700-cert podName:45599804-69cd-44f0-bb76-a15e5a3ff700 nodeName:}" failed. No retries permitted until 2026-01-27 13:24:47.051442864 +0000 UTC m=+1070.262056983 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/45599804-69cd-44f0-bb76-a15e5a3ff700-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8546q8kt" (UID: "45599804-69cd-44f0-bb76-a15e5a3ff700") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 13:24:39 crc kubenswrapper[4786]: I0127 13:24:39.557507 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d7f033ce-43f9-425f-a74c-65735b66f5b8-webhook-certs\") pod \"openstack-operator-controller-manager-596b94879d-2gz6v\" (UID: \"d7f033ce-43f9-425f-a74c-65735b66f5b8\") " pod="openstack-operators/openstack-operator-controller-manager-596b94879d-2gz6v" Jan 27 13:24:39 crc kubenswrapper[4786]: I0127 13:24:39.557556 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d7f033ce-43f9-425f-a74c-65735b66f5b8-metrics-certs\") pod \"openstack-operator-controller-manager-596b94879d-2gz6v\" (UID: \"d7f033ce-43f9-425f-a74c-65735b66f5b8\") " pod="openstack-operators/openstack-operator-controller-manager-596b94879d-2gz6v" Jan 27 13:24:39 crc kubenswrapper[4786]: E0127 13:24:39.557762 4786 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 13:24:39 crc kubenswrapper[4786]: E0127 13:24:39.557832 4786 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 13:24:39 crc kubenswrapper[4786]: E0127 13:24:39.557842 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7f033ce-43f9-425f-a74c-65735b66f5b8-webhook-certs podName:d7f033ce-43f9-425f-a74c-65735b66f5b8 nodeName:}" failed. No retries permitted until 2026-01-27 13:24:47.557818427 +0000 UTC m=+1070.768432606 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d7f033ce-43f9-425f-a74c-65735b66f5b8-webhook-certs") pod "openstack-operator-controller-manager-596b94879d-2gz6v" (UID: "d7f033ce-43f9-425f-a74c-65735b66f5b8") : secret "webhook-server-cert" not found Jan 27 13:24:39 crc kubenswrapper[4786]: E0127 13:24:39.557910 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7f033ce-43f9-425f-a74c-65735b66f5b8-metrics-certs podName:d7f033ce-43f9-425f-a74c-65735b66f5b8 nodeName:}" failed. No retries permitted until 2026-01-27 13:24:47.557891649 +0000 UTC m=+1070.768505868 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d7f033ce-43f9-425f-a74c-65735b66f5b8-metrics-certs") pod "openstack-operator-controller-manager-596b94879d-2gz6v" (UID: "d7f033ce-43f9-425f-a74c-65735b66f5b8") : secret "metrics-server-cert" not found Jan 27 13:24:44 crc kubenswrapper[4786]: E0127 13:24:44.857747 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8" Jan 27 13:24:44 crc kubenswrapper[4786]: E0127 13:24:44.858467 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-962v5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-78c6999f6f-c2c8q_openstack-operators(4058f919-d5c7-4f73-9c8a-432409f9022a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 13:24:44 crc kubenswrapper[4786]: E0127 13:24:44.859814 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-c2c8q" podUID="4058f919-d5c7-4f73-9c8a-432409f9022a" Jan 27 13:24:45 crc kubenswrapper[4786]: E0127 13:24:45.070438 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-c2c8q" podUID="4058f919-d5c7-4f73-9c8a-432409f9022a" Jan 27 13:24:45 crc kubenswrapper[4786]: E0127 13:24:45.355473 4786 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e" Jan 27 13:24:45 crc kubenswrapper[4786]: E0127 13:24:45.355691 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nf76b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-598f7747c9-lspl5_openstack-operators(cfc6eb47-18a2-442a-a1d8-ddec61462156): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 13:24:45 crc kubenswrapper[4786]: E0127 13:24:45.357170 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-lspl5" podUID="cfc6eb47-18a2-442a-a1d8-ddec61462156" Jan 27 13:24:45 crc kubenswrapper[4786]: E0127 13:24:45.835350 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d" Jan 27 13:24:45 crc kubenswrapper[4786]: E0127 13:24:45.835511 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2mj9l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-8whbf_openstack-operators(7732c732-60c5-476d-bf01-ed83c38b4d35): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 13:24:45 crc kubenswrapper[4786]: E0127 13:24:45.836693 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8whbf" podUID="7732c732-60c5-476d-bf01-ed83c38b4d35" Jan 27 13:24:46 crc kubenswrapper[4786]: E0127 13:24:46.073890 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8whbf" podUID="7732c732-60c5-476d-bf01-ed83c38b4d35" Jan 27 13:24:46 crc kubenswrapper[4786]: E0127 13:24:46.074168 4786 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-lspl5" podUID="cfc6eb47-18a2-442a-a1d8-ddec61462156" Jan 27 13:24:46 crc kubenswrapper[4786]: I0127 13:24:46.466356 4786 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 13:24:46 crc kubenswrapper[4786]: I0127 13:24:46.734167 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/43d55b3f-3bd8-4083-9e0d-f398938a47e6-cert\") pod \"infra-operator-controller-manager-694cf4f878-xdrln\" (UID: \"43d55b3f-3bd8-4083-9e0d-f398938a47e6\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-xdrln" Jan 27 13:24:46 crc kubenswrapper[4786]: I0127 13:24:46.740922 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/43d55b3f-3bd8-4083-9e0d-f398938a47e6-cert\") pod \"infra-operator-controller-manager-694cf4f878-xdrln\" (UID: \"43d55b3f-3bd8-4083-9e0d-f398938a47e6\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-xdrln" Jan 27 13:24:46 crc kubenswrapper[4786]: I0127 13:24:46.796259 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-xdrln" Jan 27 13:24:47 crc kubenswrapper[4786]: I0127 13:24:47.139345 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/45599804-69cd-44f0-bb76-a15e5a3ff700-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8546q8kt\" (UID: \"45599804-69cd-44f0-bb76-a15e5a3ff700\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8546q8kt" Jan 27 13:24:47 crc kubenswrapper[4786]: I0127 13:24:47.142854 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/45599804-69cd-44f0-bb76-a15e5a3ff700-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8546q8kt\" (UID: \"45599804-69cd-44f0-bb76-a15e5a3ff700\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8546q8kt" Jan 27 13:24:47 crc kubenswrapper[4786]: I0127 13:24:47.182049 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8546q8kt" Jan 27 13:24:47 crc kubenswrapper[4786]: I0127 13:24:47.688167 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d7f033ce-43f9-425f-a74c-65735b66f5b8-webhook-certs\") pod \"openstack-operator-controller-manager-596b94879d-2gz6v\" (UID: \"d7f033ce-43f9-425f-a74c-65735b66f5b8\") " pod="openstack-operators/openstack-operator-controller-manager-596b94879d-2gz6v" Jan 27 13:24:47 crc kubenswrapper[4786]: I0127 13:24:47.688616 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d7f033ce-43f9-425f-a74c-65735b66f5b8-metrics-certs\") pod \"openstack-operator-controller-manager-596b94879d-2gz6v\" (UID: \"d7f033ce-43f9-425f-a74c-65735b66f5b8\") " pod="openstack-operators/openstack-operator-controller-manager-596b94879d-2gz6v" Jan 27 13:24:47 crc kubenswrapper[4786]: E0127 13:24:47.688356 4786 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 13:24:47 crc kubenswrapper[4786]: E0127 13:24:47.688761 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7f033ce-43f9-425f-a74c-65735b66f5b8-webhook-certs podName:d7f033ce-43f9-425f-a74c-65735b66f5b8 nodeName:}" failed. No retries permitted until 2026-01-27 13:25:03.688732441 +0000 UTC m=+1086.899346570 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d7f033ce-43f9-425f-a74c-65735b66f5b8-webhook-certs") pod "openstack-operator-controller-manager-596b94879d-2gz6v" (UID: "d7f033ce-43f9-425f-a74c-65735b66f5b8") : secret "webhook-server-cert" not found Jan 27 13:24:47 crc kubenswrapper[4786]: I0127 13:24:47.692646 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d7f033ce-43f9-425f-a74c-65735b66f5b8-metrics-certs\") pod \"openstack-operator-controller-manager-596b94879d-2gz6v\" (UID: \"d7f033ce-43f9-425f-a74c-65735b66f5b8\") " pod="openstack-operators/openstack-operator-controller-manager-596b94879d-2gz6v" Jan 27 13:24:48 crc kubenswrapper[4786]: E0127 13:24:48.349693 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 27 13:24:48 crc kubenswrapper[4786]: E0127 13:24:48.349918 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dnv2h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-km9zd_openstack-operators(fd3a1177-720b-4e0d-83d9-9ea046369690): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 13:24:48 crc kubenswrapper[4786]: E0127 13:24:48.351159 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-km9zd" podUID="fd3a1177-720b-4e0d-83d9-9ea046369690" Jan 27 13:24:49 crc kubenswrapper[4786]: E0127 13:24:49.094185 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-km9zd" podUID="fd3a1177-720b-4e0d-83d9-9ea046369690" Jan 27 13:24:49 crc kubenswrapper[4786]: E0127 13:24:49.692172 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.113:5001/openstack-k8s-operators/nova-operator:6959a83da59d7352f8912740b69045ee5ad9ce8c" Jan 27 13:24:49 crc kubenswrapper[4786]: E0127 13:24:49.692239 4786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.113:5001/openstack-k8s-operators/nova-operator:6959a83da59d7352f8912740b69045ee5ad9ce8c" Jan 27 13:24:49 crc kubenswrapper[4786]: E0127 13:24:49.693303 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.113:5001/openstack-k8s-operators/nova-operator:6959a83da59d7352f8912740b69045ee5ad9ce8c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nqzlg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-6cffd64fd8-cgnm5_openstack-operators(1d8c2e75-6d30-4c9f-9f3f-82bda48f6d43): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 13:24:49 crc kubenswrapper[4786]: E0127 13:24:49.694470 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-6cffd64fd8-cgnm5" podUID="1d8c2e75-6d30-4c9f-9f3f-82bda48f6d43" Jan 27 13:24:50 crc kubenswrapper[4786]: E0127 13:24:50.098515 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.113:5001/openstack-k8s-operators/nova-operator:6959a83da59d7352f8912740b69045ee5ad9ce8c\\\"\"" pod="openstack-operators/nova-operator-controller-manager-6cffd64fd8-cgnm5" podUID="1d8c2e75-6d30-4c9f-9f3f-82bda48f6d43" Jan 27 13:24:50 crc kubenswrapper[4786]: E0127 13:24:50.270309 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = 
Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 27 13:24:50 crc kubenswrapper[4786]: E0127 13:24:50.270492 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hw7xd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-6p8s9_openstack-operators(5614e239-8fc3-4091-aad4-55a217ca1092): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 13:24:50 crc kubenswrapper[4786]: E0127 13:24:50.271715 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-6p8s9" podUID="5614e239-8fc3-4091-aad4-55a217ca1092" Jan 27 13:24:51 crc kubenswrapper[4786]: E0127 13:24:51.104933 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-6p8s9" podUID="5614e239-8fc3-4091-aad4-55a217ca1092" Jan 27 13:24:53 crc kubenswrapper[4786]: I0127 13:24:53.029143 4786 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8546q8kt"] Jan 27 13:24:53 crc kubenswrapper[4786]: I0127 13:24:53.107587 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-xdrln"] Jan 27 13:24:53 crc kubenswrapper[4786]: W0127 13:24:53.391881 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45599804_69cd_44f0_bb76_a15e5a3ff700.slice/crio-412ae3b4c0a955dd5316efe142a8543a3a731426fc8723823d6b468dd07e6749 WatchSource:0}: Error finding container 412ae3b4c0a955dd5316efe142a8543a3a731426fc8723823d6b468dd07e6749: Status 404 returned error can't find the container with id 412ae3b4c0a955dd5316efe142a8543a3a731426fc8723823d6b468dd07e6749 Jan 27 13:24:53 crc kubenswrapper[4786]: W0127 13:24:53.393010 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43d55b3f_3bd8_4083_9e0d_f398938a47e6.slice/crio-a23d242b2be0e396fa9e9e39a3126760f42742b44c72b4089f4f624c2b27e51c WatchSource:0}: Error finding container a23d242b2be0e396fa9e9e39a3126760f42742b44c72b4089f4f624c2b27e51c: Status 404 returned error can't find the container with id a23d242b2be0e396fa9e9e39a3126760f42742b44c72b4089f4f624c2b27e51c Jan 27 13:24:54 crc kubenswrapper[4786]: I0127 13:24:54.142083 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fkbbc" event={"ID":"39341414-eb82-400d-96ce-e546dd32d15b","Type":"ContainerStarted","Data":"45ef1bee9c94f260850ec49011c74c1987f3f7ec3929304d53552484178a64ec"} Jan 27 13:24:54 crc kubenswrapper[4786]: I0127 13:24:54.143501 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fkbbc" Jan 27 13:24:54 crc kubenswrapper[4786]: I0127 
13:24:54.144234 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-cpn89" event={"ID":"37986633-7f55-41aa-b83d-7f74a5640f2f","Type":"ContainerStarted","Data":"776f60af26bc2cff476486a7656d2528085375f51bd284fa27f3c6f952030ab7"} Jan 27 13:24:54 crc kubenswrapper[4786]: I0127 13:24:54.144294 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-cpn89" Jan 27 13:24:54 crc kubenswrapper[4786]: I0127 13:24:54.147302 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wnmcp" event={"ID":"7e2710fc-7453-40a9-81c0-ccec15d86a77","Type":"ContainerStarted","Data":"5647de96120bb818ac5297506b056e4b254eca13e6991f0b301ea7529fe5c64e"} Jan 27 13:24:54 crc kubenswrapper[4786]: I0127 13:24:54.148191 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wnmcp" Jan 27 13:24:54 crc kubenswrapper[4786]: I0127 13:24:54.157544 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-ghr9t" event={"ID":"250d0fff-09d2-49be-94a3-6eefdd3aab06","Type":"ContainerStarted","Data":"10ff5bb63e84ad0701d742873d726fd2774bb5e0aecb05a40e24a0d68d70c69d"} Jan 27 13:24:54 crc kubenswrapper[4786]: I0127 13:24:54.158243 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-ghr9t" Jan 27 13:24:54 crc kubenswrapper[4786]: I0127 13:24:54.172023 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fwlxc" event={"ID":"e607a576-31c5-4ef7-82ea-66851b5a33d2","Type":"ContainerStarted","Data":"aa0bd69913f601fc1e169594d7ce81835c2746d47c9a2e5693d9df6acba432ce"} Jan 27 13:24:54 
crc kubenswrapper[4786]: I0127 13:24:54.172728 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fwlxc" Jan 27 13:24:54 crc kubenswrapper[4786]: I0127 13:24:54.189121 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fkbbc" podStartSLOduration=3.310023826 podStartE2EDuration="24.189099491s" podCreationTimestamp="2026-01-27 13:24:30 +0000 UTC" firstStartedPulling="2026-01-27 13:24:32.616389831 +0000 UTC m=+1055.827003950" lastFinishedPulling="2026-01-27 13:24:53.495465496 +0000 UTC m=+1076.706079615" observedRunningTime="2026-01-27 13:24:54.184420483 +0000 UTC m=+1077.395034602" watchObservedRunningTime="2026-01-27 13:24:54.189099491 +0000 UTC m=+1077.399713610" Jan 27 13:24:54 crc kubenswrapper[4786]: I0127 13:24:54.210523 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-5zb2z" event={"ID":"88c4fa6a-bb1a-46fe-a863-473b9ec66ce7","Type":"ContainerStarted","Data":"96d903c6c3a84f0c3e1bc0c6371d61de24a546f3ec6e6f987e15b7d26e9b7ff0"} Jan 27 13:24:54 crc kubenswrapper[4786]: I0127 13:24:54.210658 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-5zb2z" Jan 27 13:24:54 crc kubenswrapper[4786]: I0127 13:24:54.212052 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wnmcp" podStartSLOduration=8.020408584 podStartE2EDuration="24.212037503s" podCreationTimestamp="2026-01-27 13:24:30 +0000 UTC" firstStartedPulling="2026-01-27 13:24:32.050831929 +0000 UTC m=+1055.261446038" lastFinishedPulling="2026-01-27 13:24:48.242460838 +0000 UTC m=+1071.453074957" observedRunningTime="2026-01-27 13:24:54.205643156 +0000 UTC m=+1077.416257285" 
watchObservedRunningTime="2026-01-27 13:24:54.212037503 +0000 UTC m=+1077.422651622" Jan 27 13:24:54 crc kubenswrapper[4786]: I0127 13:24:54.244076 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-xdrln" event={"ID":"43d55b3f-3bd8-4083-9e0d-f398938a47e6","Type":"ContainerStarted","Data":"a23d242b2be0e396fa9e9e39a3126760f42742b44c72b4089f4f624c2b27e51c"} Jan 27 13:24:54 crc kubenswrapper[4786]: I0127 13:24:54.251283 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-ghr9t" podStartSLOduration=8.620124346 podStartE2EDuration="24.251262982s" podCreationTimestamp="2026-01-27 13:24:30 +0000 UTC" firstStartedPulling="2026-01-27 13:24:32.61090368 +0000 UTC m=+1055.821517799" lastFinishedPulling="2026-01-27 13:24:48.242042316 +0000 UTC m=+1071.452656435" observedRunningTime="2026-01-27 13:24:54.223719614 +0000 UTC m=+1077.434333763" watchObservedRunningTime="2026-01-27 13:24:54.251262982 +0000 UTC m=+1077.461877101" Jan 27 13:24:54 crc kubenswrapper[4786]: I0127 13:24:54.267889 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cv96v" event={"ID":"42803b12-da36-48df-b9bb-ed3d4555b7b4","Type":"ContainerStarted","Data":"b059fc3cfe861284d40e8e38c0c8398b31cc7ca4f46a2e2dc470cffd6b0b5cd2"} Jan 27 13:24:54 crc kubenswrapper[4786]: I0127 13:24:54.268793 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cv96v" Jan 27 13:24:54 crc kubenswrapper[4786]: I0127 13:24:54.277477 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fwlxc" podStartSLOduration=8.316579263 podStartE2EDuration="24.277454692s" podCreationTimestamp="2026-01-27 13:24:30 +0000 UTC" 
firstStartedPulling="2026-01-27 13:24:31.808800079 +0000 UTC m=+1055.019414188" lastFinishedPulling="2026-01-27 13:24:47.769675498 +0000 UTC m=+1070.980289617" observedRunningTime="2026-01-27 13:24:54.262284055 +0000 UTC m=+1077.472898174" watchObservedRunningTime="2026-01-27 13:24:54.277454692 +0000 UTC m=+1077.488068811" Jan 27 13:24:54 crc kubenswrapper[4786]: I0127 13:24:54.287898 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8546q8kt" event={"ID":"45599804-69cd-44f0-bb76-a15e5a3ff700","Type":"ContainerStarted","Data":"412ae3b4c0a955dd5316efe142a8543a3a731426fc8723823d6b468dd07e6749"} Jan 27 13:24:54 crc kubenswrapper[4786]: I0127 13:24:54.314186 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-m75zn" event={"ID":"1eed4ae8-357f-4388-a9d3-9382b0fc84ec","Type":"ContainerStarted","Data":"c50ea26c63c18c3b9124d10afca81cde5f10c1f0dbc6ff17de023ac58fd6c25d"} Jan 27 13:24:54 crc kubenswrapper[4786]: I0127 13:24:54.314406 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-m75zn" Jan 27 13:24:54 crc kubenswrapper[4786]: I0127 13:24:54.323097 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-cpn89" podStartSLOduration=6.552828891 podStartE2EDuration="24.323078327s" podCreationTimestamp="2026-01-27 13:24:30 +0000 UTC" firstStartedPulling="2026-01-27 13:24:31.495133598 +0000 UTC m=+1054.705747717" lastFinishedPulling="2026-01-27 13:24:49.265383024 +0000 UTC m=+1072.475997153" observedRunningTime="2026-01-27 13:24:54.310232284 +0000 UTC m=+1077.520846403" watchObservedRunningTime="2026-01-27 13:24:54.323078327 +0000 UTC m=+1077.533692446" Jan 27 13:24:54 crc kubenswrapper[4786]: I0127 13:24:54.340957 4786 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-5zb2z" podStartSLOduration=8.697019391 podStartE2EDuration="24.340941639s" podCreationTimestamp="2026-01-27 13:24:30 +0000 UTC" firstStartedPulling="2026-01-27 13:24:32.598383415 +0000 UTC m=+1055.808997534" lastFinishedPulling="2026-01-27 13:24:48.242305663 +0000 UTC m=+1071.452919782" observedRunningTime="2026-01-27 13:24:54.340091115 +0000 UTC m=+1077.550705234" watchObservedRunningTime="2026-01-27 13:24:54.340941639 +0000 UTC m=+1077.551555758" Jan 27 13:24:54 crc kubenswrapper[4786]: I0127 13:24:54.345298 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-w6r6x" event={"ID":"84881c35-4d84-4c91-b401-0bf1d7de9314","Type":"ContainerStarted","Data":"d427974fc50a6a342f5cf544764eeaed9e58aeab17567946cd0d8b760574a2c1"} Jan 27 13:24:54 crc kubenswrapper[4786]: I0127 13:24:54.345983 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-w6r6x" Jan 27 13:24:54 crc kubenswrapper[4786]: I0127 13:24:54.371232 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-pb5lk" event={"ID":"5c18c2c6-04e6-4b87-b92d-586823b20ac1","Type":"ContainerStarted","Data":"4a0e1dcb4b2862778e50dfbd55a8428086e16bb82cda2f05962e0f9a676bf9e1"} Jan 27 13:24:54 crc kubenswrapper[4786]: I0127 13:24:54.371335 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cv96v" podStartSLOduration=3.586833033 podStartE2EDuration="24.371312555s" podCreationTimestamp="2026-01-27 13:24:30 +0000 UTC" firstStartedPulling="2026-01-27 13:24:32.77554931 +0000 UTC m=+1055.986163429" lastFinishedPulling="2026-01-27 13:24:53.560028832 +0000 UTC m=+1076.770642951" 
observedRunningTime="2026-01-27 13:24:54.367726116 +0000 UTC m=+1077.578340235" watchObservedRunningTime="2026-01-27 13:24:54.371312555 +0000 UTC m=+1077.581926674" Jan 27 13:24:54 crc kubenswrapper[4786]: I0127 13:24:54.372006 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-pb5lk" Jan 27 13:24:54 crc kubenswrapper[4786]: I0127 13:24:54.391381 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5mpdp" event={"ID":"a5ebc5e9-fc27-4326-927a-c791f35a71e9","Type":"ContainerStarted","Data":"67e89dd07593091310b06a235e940648ede8d3b9bb2bb384901c48b39d962aa1"} Jan 27 13:24:54 crc kubenswrapper[4786]: I0127 13:24:54.392008 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5mpdp" Jan 27 13:24:54 crc kubenswrapper[4786]: I0127 13:24:54.412428 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-w6r6x" podStartSLOduration=7.078199158 podStartE2EDuration="24.412402755s" podCreationTimestamp="2026-01-27 13:24:30 +0000 UTC" firstStartedPulling="2026-01-27 13:24:31.930971791 +0000 UTC m=+1055.141585910" lastFinishedPulling="2026-01-27 13:24:49.265175388 +0000 UTC m=+1072.475789507" observedRunningTime="2026-01-27 13:24:54.412027195 +0000 UTC m=+1077.622641314" watchObservedRunningTime="2026-01-27 13:24:54.412402755 +0000 UTC m=+1077.623016874" Jan 27 13:24:54 crc kubenswrapper[4786]: I0127 13:24:54.444281 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-m75zn" podStartSLOduration=7.558540895 podStartE2EDuration="24.444261602s" podCreationTimestamp="2026-01-27 13:24:30 +0000 UTC" firstStartedPulling="2026-01-27 13:24:32.379623556 +0000 UTC 
m=+1055.590237675" lastFinishedPulling="2026-01-27 13:24:49.265344263 +0000 UTC m=+1072.475958382" observedRunningTime="2026-01-27 13:24:54.439306286 +0000 UTC m=+1077.649920405" watchObservedRunningTime="2026-01-27 13:24:54.444261602 +0000 UTC m=+1077.654875721" Jan 27 13:24:54 crc kubenswrapper[4786]: I0127 13:24:54.521460 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-pb5lk" podStartSLOduration=8.67226698 podStartE2EDuration="24.521441276s" podCreationTimestamp="2026-01-27 13:24:30 +0000 UTC" firstStartedPulling="2026-01-27 13:24:32.39286009 +0000 UTC m=+1055.603474209" lastFinishedPulling="2026-01-27 13:24:48.242034386 +0000 UTC m=+1071.452648505" observedRunningTime="2026-01-27 13:24:54.519471371 +0000 UTC m=+1077.730085490" watchObservedRunningTime="2026-01-27 13:24:54.521441276 +0000 UTC m=+1077.732055395" Jan 27 13:24:54 crc kubenswrapper[4786]: I0127 13:24:54.522551 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5mpdp" podStartSLOduration=7.988654311 podStartE2EDuration="24.522542316s" podCreationTimestamp="2026-01-27 13:24:30 +0000 UTC" firstStartedPulling="2026-01-27 13:24:31.708598443 +0000 UTC m=+1054.919212562" lastFinishedPulling="2026-01-27 13:24:48.242486448 +0000 UTC m=+1071.453100567" observedRunningTime="2026-01-27 13:24:54.474400871 +0000 UTC m=+1077.685015000" watchObservedRunningTime="2026-01-27 13:24:54.522542316 +0000 UTC m=+1077.733156435" Jan 27 13:24:55 crc kubenswrapper[4786]: I0127 13:24:55.402527 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-q7glc" event={"ID":"f7bba046-60b2-4fa4-96ec-976f73b1ff7c","Type":"ContainerStarted","Data":"d4c61353c029e977b241dc46b73e9928a0f91a304778ae58a30f860046377ac7"} Jan 27 13:24:55 crc kubenswrapper[4786]: I0127 13:24:55.403049 
4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-q7glc" Jan 27 13:24:55 crc kubenswrapper[4786]: I0127 13:24:55.404283 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-qmp6d" event={"ID":"9d2d7f2c-4522-45bf-a12d-1eb7cc11041e","Type":"ContainerStarted","Data":"8126f35d6822fb38d56ac00e0939dbfdd107c73a25ed0851eb4b6fd6bfa1aba1"} Jan 27 13:24:55 crc kubenswrapper[4786]: I0127 13:24:55.404707 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-qmp6d" Jan 27 13:24:55 crc kubenswrapper[4786]: I0127 13:24:55.408536 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-z5xnb" event={"ID":"4a240cd4-c49f-4716-80bb-6d1ba632a32c","Type":"ContainerStarted","Data":"cd4b9db7572bcb6d94d12e4e572aa1ea65908e97b0670483c16274e854af857c"} Jan 27 13:24:55 crc kubenswrapper[4786]: I0127 13:24:55.424378 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-q7glc" podStartSLOduration=3.69926423 podStartE2EDuration="24.424361549s" podCreationTimestamp="2026-01-27 13:24:31 +0000 UTC" firstStartedPulling="2026-01-27 13:24:32.763796586 +0000 UTC m=+1055.974410705" lastFinishedPulling="2026-01-27 13:24:53.488893905 +0000 UTC m=+1076.699508024" observedRunningTime="2026-01-27 13:24:55.419681511 +0000 UTC m=+1078.630295630" watchObservedRunningTime="2026-01-27 13:24:55.424361549 +0000 UTC m=+1078.634975668" Jan 27 13:24:55 crc kubenswrapper[4786]: I0127 13:24:55.440150 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-z5xnb" podStartSLOduration=6.990346902 podStartE2EDuration="25.440130814s" 
podCreationTimestamp="2026-01-27 13:24:30 +0000 UTC" firstStartedPulling="2026-01-27 13:24:31.78300759 +0000 UTC m=+1054.993621709" lastFinishedPulling="2026-01-27 13:24:50.232791502 +0000 UTC m=+1073.443405621" observedRunningTime="2026-01-27 13:24:55.432571965 +0000 UTC m=+1078.643186074" watchObservedRunningTime="2026-01-27 13:24:55.440130814 +0000 UTC m=+1078.650744933" Jan 27 13:24:55 crc kubenswrapper[4786]: I0127 13:24:55.455510 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-qmp6d" podStartSLOduration=4.797739951 podStartE2EDuration="25.455488017s" podCreationTimestamp="2026-01-27 13:24:30 +0000 UTC" firstStartedPulling="2026-01-27 13:24:32.808741463 +0000 UTC m=+1056.019355582" lastFinishedPulling="2026-01-27 13:24:53.466489539 +0000 UTC m=+1076.677103648" observedRunningTime="2026-01-27 13:24:55.447783655 +0000 UTC m=+1078.658397794" watchObservedRunningTime="2026-01-27 13:24:55.455488017 +0000 UTC m=+1078.666102136" Jan 27 13:24:56 crc kubenswrapper[4786]: I0127 13:24:56.416501 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-z5xnb" Jan 27 13:25:00 crc kubenswrapper[4786]: I0127 13:25:00.884641 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-5mpdp" Jan 27 13:25:00 crc kubenswrapper[4786]: I0127 13:25:00.942295 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-cpn89" Jan 27 13:25:00 crc kubenswrapper[4786]: I0127 13:25:00.952973 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-w6r6x" Jan 27 13:25:00 crc kubenswrapper[4786]: I0127 13:25:00.996890 4786 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fwlxc" Jan 27 13:25:01 crc kubenswrapper[4786]: I0127 13:25:01.045028 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-z5xnb" Jan 27 13:25:01 crc kubenswrapper[4786]: I0127 13:25:01.112577 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wnmcp" Jan 27 13:25:01 crc kubenswrapper[4786]: I0127 13:25:01.374359 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-5zb2z" Jan 27 13:25:01 crc kubenswrapper[4786]: I0127 13:25:01.384321 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-m75zn" Jan 27 13:25:01 crc kubenswrapper[4786]: I0127 13:25:01.417140 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-pb5lk" Jan 27 13:25:01 crc kubenswrapper[4786]: I0127 13:25:01.537824 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-cv96v" Jan 27 13:25:01 crc kubenswrapper[4786]: I0127 13:25:01.556870 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-ghr9t" Jan 27 13:25:01 crc kubenswrapper[4786]: I0127 13:25:01.628121 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-qmp6d" Jan 27 13:25:01 crc kubenswrapper[4786]: I0127 13:25:01.674128 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-fkbbc" Jan 27 13:25:01 crc kubenswrapper[4786]: I0127 13:25:01.952291 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-q7glc" Jan 27 13:25:03 crc kubenswrapper[4786]: I0127 13:25:03.728736 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d7f033ce-43f9-425f-a74c-65735b66f5b8-webhook-certs\") pod \"openstack-operator-controller-manager-596b94879d-2gz6v\" (UID: \"d7f033ce-43f9-425f-a74c-65735b66f5b8\") " pod="openstack-operators/openstack-operator-controller-manager-596b94879d-2gz6v" Jan 27 13:25:03 crc kubenswrapper[4786]: I0127 13:25:03.734350 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d7f033ce-43f9-425f-a74c-65735b66f5b8-webhook-certs\") pod \"openstack-operator-controller-manager-596b94879d-2gz6v\" (UID: \"d7f033ce-43f9-425f-a74c-65735b66f5b8\") " pod="openstack-operators/openstack-operator-controller-manager-596b94879d-2gz6v" Jan 27 13:25:03 crc kubenswrapper[4786]: I0127 13:25:03.749700 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-w66v8" Jan 27 13:25:03 crc kubenswrapper[4786]: I0127 13:25:03.758917 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-596b94879d-2gz6v" Jan 27 13:25:12 crc kubenswrapper[4786]: E0127 13:25:12.103030 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/infra-operator@sha256:72c534cbfa71ed3501bee4937ab2beb8fda27b890ef7a26789824f52710b3846" Jan 27 13:25:12 crc kubenswrapper[4786]: E0127 13:25:12.103828 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:72c534cbfa71ed3501bee4937ab2beb8fda27b890ef7a26789824f52710b3846,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g6jnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-694cf4f878-xdrln_openstack-operators(43d55b3f-3bd8-4083-9e0d-f398938a47e6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 13:25:12 crc kubenswrapper[4786]: E0127 13:25:12.105015 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-xdrln" podUID="43d55b3f-3bd8-4083-9e0d-f398938a47e6" Jan 27 13:25:12 crc kubenswrapper[4786]: E0127 13:25:12.659445 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/infra-operator@sha256:72c534cbfa71ed3501bee4937ab2beb8fda27b890ef7a26789824f52710b3846\\\"\"" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-xdrln" podUID="43d55b3f-3bd8-4083-9e0d-f398938a47e6" Jan 27 13:25:13 crc kubenswrapper[4786]: W0127 13:25:13.142120 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7f033ce_43f9_425f_a74c_65735b66f5b8.slice/crio-5effc62c90124e4107238b89ce52fb3d668c9de4df3fa050e241035777aca27a WatchSource:0}: Error finding container 5effc62c90124e4107238b89ce52fb3d668c9de4df3fa050e241035777aca27a: Status 404 returned error can't find the container with id 5effc62c90124e4107238b89ce52fb3d668c9de4df3fa050e241035777aca27a Jan 27 13:25:13 crc kubenswrapper[4786]: I0127 13:25:13.142210 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-596b94879d-2gz6v"] Jan 27 13:25:13 crc kubenswrapper[4786]: I0127 13:25:13.540744 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-lspl5" event={"ID":"cfc6eb47-18a2-442a-a1d8-ddec61462156","Type":"ContainerStarted","Data":"72452e16194b36803e6d86c8b7d9c596106ef16f3725aeeb7811beab12614f81"} Jan 27 13:25:13 crc kubenswrapper[4786]: I0127 13:25:13.541201 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-lspl5" Jan 27 13:25:13 crc kubenswrapper[4786]: I0127 13:25:13.543625 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-km9zd" event={"ID":"fd3a1177-720b-4e0d-83d9-9ea046369690","Type":"ContainerStarted","Data":"6086d53b2c646514f0bd4628d4ba9ac849d995ec6f14621241e294dbf8dee653"} Jan 27 13:25:13 crc kubenswrapper[4786]: I0127 13:25:13.545942 4786 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-c2c8q" event={"ID":"4058f919-d5c7-4f73-9c8a-432409f9022a","Type":"ContainerStarted","Data":"b2c60cf74ba9e68bf4bb94eca83f1c2d4031e327e6b7def0871b45f03e38d5e3"} Jan 27 13:25:13 crc kubenswrapper[4786]: I0127 13:25:13.546134 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-c2c8q" Jan 27 13:25:13 crc kubenswrapper[4786]: I0127 13:25:13.547516 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8546q8kt" event={"ID":"45599804-69cd-44f0-bb76-a15e5a3ff700","Type":"ContainerStarted","Data":"2f1fbd629db101575ec84d154371774d393d589f731fb8258d56562cd8a696bb"} Jan 27 13:25:13 crc kubenswrapper[4786]: I0127 13:25:13.547645 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8546q8kt" Jan 27 13:25:13 crc kubenswrapper[4786]: I0127 13:25:13.548954 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-6p8s9" event={"ID":"5614e239-8fc3-4091-aad4-55a217ca1092","Type":"ContainerStarted","Data":"4771327f3f69e57eeee1e5a2f70e9ae9f1c454761bd63033e20dd1c493ceab08"} Jan 27 13:25:13 crc kubenswrapper[4786]: I0127 13:25:13.549533 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-6p8s9" Jan 27 13:25:13 crc kubenswrapper[4786]: I0127 13:25:13.550735 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6cffd64fd8-cgnm5" event={"ID":"1d8c2e75-6d30-4c9f-9f3f-82bda48f6d43","Type":"ContainerStarted","Data":"206bdccfb8f90499d495acca02e13de635bf3c8ca16e54032fd3e46e47c8d1bc"} Jan 27 13:25:13 crc kubenswrapper[4786]: I0127 
13:25:13.551121 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6cffd64fd8-cgnm5" Jan 27 13:25:13 crc kubenswrapper[4786]: I0127 13:25:13.552269 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-596b94879d-2gz6v" event={"ID":"d7f033ce-43f9-425f-a74c-65735b66f5b8","Type":"ContainerStarted","Data":"8260f2666436f0ca497d71679aedd4959a78d0d46e4cf85fc1a48070ec23609e"} Jan 27 13:25:13 crc kubenswrapper[4786]: I0127 13:25:13.552291 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-596b94879d-2gz6v" event={"ID":"d7f033ce-43f9-425f-a74c-65735b66f5b8","Type":"ContainerStarted","Data":"5effc62c90124e4107238b89ce52fb3d668c9de4df3fa050e241035777aca27a"} Jan 27 13:25:13 crc kubenswrapper[4786]: I0127 13:25:13.552383 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-596b94879d-2gz6v" Jan 27 13:25:13 crc kubenswrapper[4786]: I0127 13:25:13.554265 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8whbf" event={"ID":"7732c732-60c5-476d-bf01-ed83c38b4d35","Type":"ContainerStarted","Data":"c451158b45a2874ec078c93530c1ed124b274cbb9491e0fba11c611fb2b0076f"} Jan 27 13:25:13 crc kubenswrapper[4786]: I0127 13:25:13.554427 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8whbf" Jan 27 13:25:13 crc kubenswrapper[4786]: I0127 13:25:13.578593 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-lspl5" podStartSLOduration=2.918764741 podStartE2EDuration="43.578571858s" podCreationTimestamp="2026-01-27 13:24:30 +0000 UTC" firstStartedPulling="2026-01-27 
13:24:32.065489492 +0000 UTC m=+1055.276103611" lastFinishedPulling="2026-01-27 13:25:12.725296589 +0000 UTC m=+1095.935910728" observedRunningTime="2026-01-27 13:25:13.576726277 +0000 UTC m=+1096.787340396" watchObservedRunningTime="2026-01-27 13:25:13.578571858 +0000 UTC m=+1096.789185977" Jan 27 13:25:13 crc kubenswrapper[4786]: I0127 13:25:13.598498 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-6p8s9" podStartSLOduration=3.055329349 podStartE2EDuration="43.598475585s" podCreationTimestamp="2026-01-27 13:24:30 +0000 UTC" firstStartedPulling="2026-01-27 13:24:32.182305657 +0000 UTC m=+1055.392919776" lastFinishedPulling="2026-01-27 13:25:12.725451883 +0000 UTC m=+1095.936066012" observedRunningTime="2026-01-27 13:25:13.597439087 +0000 UTC m=+1096.808053236" watchObservedRunningTime="2026-01-27 13:25:13.598475585 +0000 UTC m=+1096.809089704" Jan 27 13:25:13 crc kubenswrapper[4786]: I0127 13:25:13.628974 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-km9zd" podStartSLOduration=2.637013504 podStartE2EDuration="42.628950364s" podCreationTimestamp="2026-01-27 13:24:31 +0000 UTC" firstStartedPulling="2026-01-27 13:24:32.731660863 +0000 UTC m=+1055.942274972" lastFinishedPulling="2026-01-27 13:25:12.723597713 +0000 UTC m=+1095.934211832" observedRunningTime="2026-01-27 13:25:13.615212696 +0000 UTC m=+1096.825826805" watchObservedRunningTime="2026-01-27 13:25:13.628950364 +0000 UTC m=+1096.839564483" Jan 27 13:25:13 crc kubenswrapper[4786]: I0127 13:25:13.634228 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8whbf" podStartSLOduration=2.487594442 podStartE2EDuration="42.634208729s" podCreationTimestamp="2026-01-27 13:24:31 +0000 UTC" firstStartedPulling="2026-01-27 13:24:32.576937515 
+0000 UTC m=+1055.787551634" lastFinishedPulling="2026-01-27 13:25:12.723551802 +0000 UTC m=+1095.934165921" observedRunningTime="2026-01-27 13:25:13.62738632 +0000 UTC m=+1096.838000439" watchObservedRunningTime="2026-01-27 13:25:13.634208729 +0000 UTC m=+1096.844822858" Jan 27 13:25:13 crc kubenswrapper[4786]: I0127 13:25:13.648151 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-c2c8q" podStartSLOduration=3.32434451 podStartE2EDuration="43.648133192s" podCreationTimestamp="2026-01-27 13:24:30 +0000 UTC" firstStartedPulling="2026-01-27 13:24:32.399847382 +0000 UTC m=+1055.610461511" lastFinishedPulling="2026-01-27 13:25:12.723636084 +0000 UTC m=+1095.934250193" observedRunningTime="2026-01-27 13:25:13.641983072 +0000 UTC m=+1096.852597201" watchObservedRunningTime="2026-01-27 13:25:13.648133192 +0000 UTC m=+1096.858747311" Jan 27 13:25:13 crc kubenswrapper[4786]: I0127 13:25:13.670467 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-6cffd64fd8-cgnm5" podStartSLOduration=3.5433791980000002 podStartE2EDuration="43.670448136s" podCreationTimestamp="2026-01-27 13:24:30 +0000 UTC" firstStartedPulling="2026-01-27 13:24:32.598391695 +0000 UTC m=+1055.809005814" lastFinishedPulling="2026-01-27 13:25:12.725460633 +0000 UTC m=+1095.936074752" observedRunningTime="2026-01-27 13:25:13.669757026 +0000 UTC m=+1096.880371155" watchObservedRunningTime="2026-01-27 13:25:13.670448136 +0000 UTC m=+1096.881062245" Jan 27 13:25:13 crc kubenswrapper[4786]: I0127 13:25:13.705972 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-596b94879d-2gz6v" podStartSLOduration=42.705956412 podStartE2EDuration="42.705956412s" podCreationTimestamp="2026-01-27 13:24:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:25:13.703312979 +0000 UTC m=+1096.913927088" watchObservedRunningTime="2026-01-27 13:25:13.705956412 +0000 UTC m=+1096.916570521" Jan 27 13:25:21 crc kubenswrapper[4786]: I0127 13:25:21.267132 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-lspl5" Jan 27 13:25:21 crc kubenswrapper[4786]: I0127 13:25:21.283824 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8546q8kt" podStartSLOduration=32.009737762 podStartE2EDuration="51.283808072s" podCreationTimestamp="2026-01-27 13:24:30 +0000 UTC" firstStartedPulling="2026-01-27 13:24:53.458561681 +0000 UTC m=+1076.669175800" lastFinishedPulling="2026-01-27 13:25:12.732631991 +0000 UTC m=+1095.943246110" observedRunningTime="2026-01-27 13:25:13.733057798 +0000 UTC m=+1096.943671917" watchObservedRunningTime="2026-01-27 13:25:21.283808072 +0000 UTC m=+1104.494422191" Jan 27 13:25:21 crc kubenswrapper[4786]: I0127 13:25:21.287429 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-6p8s9" Jan 27 13:25:21 crc kubenswrapper[4786]: I0127 13:25:21.341619 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-c2c8q" Jan 27 13:25:21 crc kubenswrapper[4786]: I0127 13:25:21.394778 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-6cffd64fd8-cgnm5" Jan 27 13:25:21 crc kubenswrapper[4786]: I0127 13:25:21.780658 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8whbf" Jan 27 13:25:23 crc 
kubenswrapper[4786]: I0127 13:25:23.764827 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-596b94879d-2gz6v" Jan 27 13:25:24 crc kubenswrapper[4786]: I0127 13:25:24.639380 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-xdrln" event={"ID":"43d55b3f-3bd8-4083-9e0d-f398938a47e6","Type":"ContainerStarted","Data":"61d4d5ad4cd6eb0d62fcf35e0e98b0ab096aab9e9507068b5b80d849d0187d1b"} Jan 27 13:25:24 crc kubenswrapper[4786]: I0127 13:25:24.639997 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-xdrln" Jan 27 13:25:24 crc kubenswrapper[4786]: I0127 13:25:24.658477 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-xdrln" podStartSLOduration=24.153422967 podStartE2EDuration="54.658457038s" podCreationTimestamp="2026-01-27 13:24:30 +0000 UTC" firstStartedPulling="2026-01-27 13:24:53.4585573 +0000 UTC m=+1076.669171419" lastFinishedPulling="2026-01-27 13:25:23.963591371 +0000 UTC m=+1107.174205490" observedRunningTime="2026-01-27 13:25:24.656282077 +0000 UTC m=+1107.866896196" watchObservedRunningTime="2026-01-27 13:25:24.658457038 +0000 UTC m=+1107.869071157" Jan 27 13:25:27 crc kubenswrapper[4786]: I0127 13:25:27.188711 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8546q8kt" Jan 27 13:25:36 crc kubenswrapper[4786]: I0127 13:25:36.802139 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-xdrln" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.318878 4786 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["nova-kuttl-default/rabbitmq-server-0"] Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.320592 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.323488 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openshift-service-ca.crt" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.323568 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-default-user" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.323968 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-plugins-conf" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.324046 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-server-dockercfg-prdsq" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.325180 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-erlang-cookie" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.325391 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-server-conf" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.326382 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"kube-root-ca.crt" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.342517 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-server-0"] Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.419002 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c8472c3b-b877-4e6c-992f-f4146f81e3fc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: 
\"c8472c3b-b877-4e6c-992f-f4146f81e3fc\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.419072 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjk28\" (UniqueName: \"kubernetes.io/projected/c8472c3b-b877-4e6c-992f-f4146f81e3fc-kube-api-access-zjk28\") pod \"rabbitmq-server-0\" (UID: \"c8472c3b-b877-4e6c-992f-f4146f81e3fc\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.419198 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6b243f98-374a-462d-9a7f-da7b55ffb88c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6b243f98-374a-462d-9a7f-da7b55ffb88c\") pod \"rabbitmq-server-0\" (UID: \"c8472c3b-b877-4e6c-992f-f4146f81e3fc\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.419237 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c8472c3b-b877-4e6c-992f-f4146f81e3fc-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c8472c3b-b877-4e6c-992f-f4146f81e3fc\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.419294 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c8472c3b-b877-4e6c-992f-f4146f81e3fc-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c8472c3b-b877-4e6c-992f-f4146f81e3fc\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.419344 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c8472c3b-b877-4e6c-992f-f4146f81e3fc-server-conf\") pod 
\"rabbitmq-server-0\" (UID: \"c8472c3b-b877-4e6c-992f-f4146f81e3fc\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.419366 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c8472c3b-b877-4e6c-992f-f4146f81e3fc-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c8472c3b-b877-4e6c-992f-f4146f81e3fc\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.419384 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c8472c3b-b877-4e6c-992f-f4146f81e3fc-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c8472c3b-b877-4e6c-992f-f4146f81e3fc\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.419513 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c8472c3b-b877-4e6c-992f-f4146f81e3fc-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c8472c3b-b877-4e6c-992f-f4146f81e3fc\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.520943 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c8472c3b-b877-4e6c-992f-f4146f81e3fc-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c8472c3b-b877-4e6c-992f-f4146f81e3fc\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.521005 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c8472c3b-b877-4e6c-992f-f4146f81e3fc-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: 
\"c8472c3b-b877-4e6c-992f-f4146f81e3fc\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.521028 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c8472c3b-b877-4e6c-992f-f4146f81e3fc-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c8472c3b-b877-4e6c-992f-f4146f81e3fc\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.521073 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c8472c3b-b877-4e6c-992f-f4146f81e3fc-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c8472c3b-b877-4e6c-992f-f4146f81e3fc\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.521767 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c8472c3b-b877-4e6c-992f-f4146f81e3fc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c8472c3b-b877-4e6c-992f-f4146f81e3fc\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.521821 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c8472c3b-b877-4e6c-992f-f4146f81e3fc-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c8472c3b-b877-4e6c-992f-f4146f81e3fc\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.522139 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c8472c3b-b877-4e6c-992f-f4146f81e3fc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c8472c3b-b877-4e6c-992f-f4146f81e3fc\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 
13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.522220 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjk28\" (UniqueName: \"kubernetes.io/projected/c8472c3b-b877-4e6c-992f-f4146f81e3fc-kube-api-access-zjk28\") pod \"rabbitmq-server-0\" (UID: \"c8472c3b-b877-4e6c-992f-f4146f81e3fc\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.522239 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c8472c3b-b877-4e6c-992f-f4146f81e3fc-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c8472c3b-b877-4e6c-992f-f4146f81e3fc\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.522286 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6b243f98-374a-462d-9a7f-da7b55ffb88c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6b243f98-374a-462d-9a7f-da7b55ffb88c\") pod \"rabbitmq-server-0\" (UID: \"c8472c3b-b877-4e6c-992f-f4146f81e3fc\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.522308 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c8472c3b-b877-4e6c-992f-f4146f81e3fc-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c8472c3b-b877-4e6c-992f-f4146f81e3fc\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.522380 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c8472c3b-b877-4e6c-992f-f4146f81e3fc-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c8472c3b-b877-4e6c-992f-f4146f81e3fc\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.523129 4786 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c8472c3b-b877-4e6c-992f-f4146f81e3fc-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c8472c3b-b877-4e6c-992f-f4146f81e3fc\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.528321 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c8472c3b-b877-4e6c-992f-f4146f81e3fc-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c8472c3b-b877-4e6c-992f-f4146f81e3fc\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.529033 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c8472c3b-b877-4e6c-992f-f4146f81e3fc-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c8472c3b-b877-4e6c-992f-f4146f81e3fc\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.529205 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c8472c3b-b877-4e6c-992f-f4146f81e3fc-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c8472c3b-b877-4e6c-992f-f4146f81e3fc\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.529713 4786 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.529736 4786 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6b243f98-374a-462d-9a7f-da7b55ffb88c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6b243f98-374a-462d-9a7f-da7b55ffb88c\") pod \"rabbitmq-server-0\" (UID: \"c8472c3b-b877-4e6c-992f-f4146f81e3fc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/03234e88557ffe191507bdd84d21e80cf223ea7715057a50598ddd458e4665c7/globalmount\"" pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.545256 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjk28\" (UniqueName: \"kubernetes.io/projected/c8472c3b-b877-4e6c-992f-f4146f81e3fc-kube-api-access-zjk28\") pod \"rabbitmq-server-0\" (UID: \"c8472c3b-b877-4e6c-992f-f4146f81e3fc\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.554949 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6b243f98-374a-462d-9a7f-da7b55ffb88c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6b243f98-374a-462d-9a7f-da7b55ffb88c\") pod \"rabbitmq-server-0\" (UID: \"c8472c3b-b877-4e6c-992f-f4146f81e3fc\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.624114 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/rabbitmq-broadcaster-server-0"] Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.625536 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.627919 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-broadcaster-plugins-conf" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.628187 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-broadcaster-server-dockercfg-tn6vn" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.628302 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-broadcaster-erlang-cookie" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.628964 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-broadcaster-default-user" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.633657 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-broadcaster-server-conf" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.637107 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-broadcaster-server-0"] Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.641208 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.725155 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f76eacb2-75ca-46c4-badb-b1404b018bf6-rabbitmq-erlang-cookie\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"f76eacb2-75ca-46c4-badb-b1404b018bf6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.725239 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f76eacb2-75ca-46c4-badb-b1404b018bf6-rabbitmq-confd\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"f76eacb2-75ca-46c4-badb-b1404b018bf6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.725258 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f76eacb2-75ca-46c4-badb-b1404b018bf6-pod-info\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"f76eacb2-75ca-46c4-badb-b1404b018bf6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.725277 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f76eacb2-75ca-46c4-badb-b1404b018bf6-plugins-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"f76eacb2-75ca-46c4-badb-b1404b018bf6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.725304 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-51f79b92-838d-4e09-98b0-f3cb640c8930\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51f79b92-838d-4e09-98b0-f3cb640c8930\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"f76eacb2-75ca-46c4-badb-b1404b018bf6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.725324 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f76eacb2-75ca-46c4-badb-b1404b018bf6-erlang-cookie-secret\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"f76eacb2-75ca-46c4-badb-b1404b018bf6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.725346 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f76eacb2-75ca-46c4-badb-b1404b018bf6-rabbitmq-plugins\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"f76eacb2-75ca-46c4-badb-b1404b018bf6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.725375 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f76eacb2-75ca-46c4-badb-b1404b018bf6-server-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"f76eacb2-75ca-46c4-badb-b1404b018bf6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.725396 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6hr6\" (UniqueName: \"kubernetes.io/projected/f76eacb2-75ca-46c4-badb-b1404b018bf6-kube-api-access-h6hr6\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"f76eacb2-75ca-46c4-badb-b1404b018bf6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.826711 
4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f76eacb2-75ca-46c4-badb-b1404b018bf6-server-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"f76eacb2-75ca-46c4-badb-b1404b018bf6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.827025 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6hr6\" (UniqueName: \"kubernetes.io/projected/f76eacb2-75ca-46c4-badb-b1404b018bf6-kube-api-access-h6hr6\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"f76eacb2-75ca-46c4-badb-b1404b018bf6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.827050 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f76eacb2-75ca-46c4-badb-b1404b018bf6-rabbitmq-erlang-cookie\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"f76eacb2-75ca-46c4-badb-b1404b018bf6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.827115 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f76eacb2-75ca-46c4-badb-b1404b018bf6-rabbitmq-confd\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"f76eacb2-75ca-46c4-badb-b1404b018bf6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.827134 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f76eacb2-75ca-46c4-badb-b1404b018bf6-pod-info\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"f76eacb2-75ca-46c4-badb-b1404b018bf6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 
13:25:47.827152 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f76eacb2-75ca-46c4-badb-b1404b018bf6-plugins-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"f76eacb2-75ca-46c4-badb-b1404b018bf6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.827184 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-51f79b92-838d-4e09-98b0-f3cb640c8930\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51f79b92-838d-4e09-98b0-f3cb640c8930\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"f76eacb2-75ca-46c4-badb-b1404b018bf6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.827210 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f76eacb2-75ca-46c4-badb-b1404b018bf6-erlang-cookie-secret\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"f76eacb2-75ca-46c4-badb-b1404b018bf6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.827232 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f76eacb2-75ca-46c4-badb-b1404b018bf6-rabbitmq-plugins\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"f76eacb2-75ca-46c4-badb-b1404b018bf6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.827649 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f76eacb2-75ca-46c4-badb-b1404b018bf6-rabbitmq-plugins\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"f76eacb2-75ca-46c4-badb-b1404b018bf6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" 
Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.828226 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f76eacb2-75ca-46c4-badb-b1404b018bf6-plugins-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"f76eacb2-75ca-46c4-badb-b1404b018bf6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.828303 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f76eacb2-75ca-46c4-badb-b1404b018bf6-server-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"f76eacb2-75ca-46c4-badb-b1404b018bf6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.829224 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f76eacb2-75ca-46c4-badb-b1404b018bf6-rabbitmq-erlang-cookie\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"f76eacb2-75ca-46c4-badb-b1404b018bf6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.834001 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f76eacb2-75ca-46c4-badb-b1404b018bf6-rabbitmq-confd\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"f76eacb2-75ca-46c4-badb-b1404b018bf6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.834103 4786 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.834150 4786 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-51f79b92-838d-4e09-98b0-f3cb640c8930\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51f79b92-838d-4e09-98b0-f3cb640c8930\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"f76eacb2-75ca-46c4-badb-b1404b018bf6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/daeca35ab6a7d54bb0a3f3c8b47a2db00b7d22a1e20d1647744055d1540c7389/globalmount\"" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.835023 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f76eacb2-75ca-46c4-badb-b1404b018bf6-erlang-cookie-secret\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"f76eacb2-75ca-46c4-badb-b1404b018bf6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.835709 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f76eacb2-75ca-46c4-badb-b1404b018bf6-pod-info\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"f76eacb2-75ca-46c4-badb-b1404b018bf6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.855982 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6hr6\" (UniqueName: \"kubernetes.io/projected/f76eacb2-75ca-46c4-badb-b1404b018bf6-kube-api-access-h6hr6\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"f76eacb2-75ca-46c4-badb-b1404b018bf6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.860214 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/openstack-galera-0"] Jan 27 13:25:47 crc 
kubenswrapper[4786]: I0127 13:25:47.861292 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/openstack-galera-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.864368 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"cert-galera-openstack-svc" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.864637 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-config-data" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.864815 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-scripts" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.864961 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"galera-openstack-dockercfg-8l6vl" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.869016 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstack-galera-0"] Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.880277 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"combined-ca-bundle" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.906177 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-51f79b92-838d-4e09-98b0-f3cb640c8930\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51f79b92-838d-4e09-98b0-f3cb640c8930\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"f76eacb2-75ca-46c4-badb-b1404b018bf6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.928148 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d5e1220e-a41a-4e46-890f-3502e548bf66-kolla-config\") pod \"openstack-galera-0\" (UID: \"d5e1220e-a41a-4e46-890f-3502e548bf66\") " 
pod="nova-kuttl-default/openstack-galera-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.928210 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm2tb\" (UniqueName: \"kubernetes.io/projected/d5e1220e-a41a-4e46-890f-3502e548bf66-kube-api-access-lm2tb\") pod \"openstack-galera-0\" (UID: \"d5e1220e-a41a-4e46-890f-3502e548bf66\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.928250 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4944229a-bfa6-49fc-a40f-95e8a5af4dac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4944229a-bfa6-49fc-a40f-95e8a5af4dac\") pod \"openstack-galera-0\" (UID: \"d5e1220e-a41a-4e46-890f-3502e548bf66\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.928278 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d5e1220e-a41a-4e46-890f-3502e548bf66-config-data-default\") pod \"openstack-galera-0\" (UID: \"d5e1220e-a41a-4e46-890f-3502e548bf66\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.928313 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5e1220e-a41a-4e46-890f-3502e548bf66-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d5e1220e-a41a-4e46-890f-3502e548bf66\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.928345 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d5e1220e-a41a-4e46-890f-3502e548bf66-config-data-generated\") pod 
\"openstack-galera-0\" (UID: \"d5e1220e-a41a-4e46-890f-3502e548bf66\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.928370 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5e1220e-a41a-4e46-890f-3502e548bf66-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d5e1220e-a41a-4e46-890f-3502e548bf66\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.928415 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5e1220e-a41a-4e46-890f-3502e548bf66-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d5e1220e-a41a-4e46-890f-3502e548bf66\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.943059 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.960844 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/rabbitmq-cell1-server-0"] Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.962409 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.969236 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-cell1-default-user" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.969460 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-cell1-plugins-conf" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.969668 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-cell1-server-conf" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.970200 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-cell1-erlang-cookie" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.970317 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-cell1-server-dockercfg-f9jv9" Jan 27 13:25:47 crc kubenswrapper[4786]: I0127 13:25:47.993707 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-cell1-server-0"] Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.029337 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9f2857cb-9399-4563-b68e-3b51cbd47f80-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f2857cb-9399-4563-b68e-3b51cbd47f80\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.029565 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4gvg\" (UniqueName: \"kubernetes.io/projected/9f2857cb-9399-4563-b68e-3b51cbd47f80-kube-api-access-p4gvg\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f2857cb-9399-4563-b68e-3b51cbd47f80\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" 
Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.029588 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9f2857cb-9399-4563-b68e-3b51cbd47f80-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f2857cb-9399-4563-b68e-3b51cbd47f80\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.029735 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9f2857cb-9399-4563-b68e-3b51cbd47f80-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f2857cb-9399-4563-b68e-3b51cbd47f80\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.029782 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d5e1220e-a41a-4e46-890f-3502e548bf66-kolla-config\") pod \"openstack-galera-0\" (UID: \"d5e1220e-a41a-4e46-890f-3502e548bf66\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.029807 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9f2857cb-9399-4563-b68e-3b51cbd47f80-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f2857cb-9399-4563-b68e-3b51cbd47f80\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.029838 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm2tb\" (UniqueName: \"kubernetes.io/projected/d5e1220e-a41a-4e46-890f-3502e548bf66-kube-api-access-lm2tb\") pod \"openstack-galera-0\" (UID: \"d5e1220e-a41a-4e46-890f-3502e548bf66\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 
13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.029866 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9f2857cb-9399-4563-b68e-3b51cbd47f80-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f2857cb-9399-4563-b68e-3b51cbd47f80\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.029886 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4944229a-bfa6-49fc-a40f-95e8a5af4dac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4944229a-bfa6-49fc-a40f-95e8a5af4dac\") pod \"openstack-galera-0\" (UID: \"d5e1220e-a41a-4e46-890f-3502e548bf66\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.029908 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d5e1220e-a41a-4e46-890f-3502e548bf66-config-data-default\") pod \"openstack-galera-0\" (UID: \"d5e1220e-a41a-4e46-890f-3502e548bf66\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.029927 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9f2857cb-9399-4563-b68e-3b51cbd47f80-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f2857cb-9399-4563-b68e-3b51cbd47f80\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.029959 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-94440d74-6431-48d9-8518-f7315e80ce71\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-94440d74-6431-48d9-8518-f7315e80ce71\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"9f2857cb-9399-4563-b68e-3b51cbd47f80\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.029980 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5e1220e-a41a-4e46-890f-3502e548bf66-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d5e1220e-a41a-4e46-890f-3502e548bf66\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.030332 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d5e1220e-a41a-4e46-890f-3502e548bf66-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d5e1220e-a41a-4e46-890f-3502e548bf66\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.030373 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5e1220e-a41a-4e46-890f-3502e548bf66-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d5e1220e-a41a-4e46-890f-3502e548bf66\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.030420 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5e1220e-a41a-4e46-890f-3502e548bf66-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d5e1220e-a41a-4e46-890f-3502e548bf66\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.030448 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9f2857cb-9399-4563-b68e-3b51cbd47f80-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f2857cb-9399-4563-b68e-3b51cbd47f80\") " 
pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.030982 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d5e1220e-a41a-4e46-890f-3502e548bf66-config-data-default\") pod \"openstack-galera-0\" (UID: \"d5e1220e-a41a-4e46-890f-3502e548bf66\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.031415 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d5e1220e-a41a-4e46-890f-3502e548bf66-kolla-config\") pod \"openstack-galera-0\" (UID: \"d5e1220e-a41a-4e46-890f-3502e548bf66\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.031443 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d5e1220e-a41a-4e46-890f-3502e548bf66-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d5e1220e-a41a-4e46-890f-3502e548bf66\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.032298 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5e1220e-a41a-4e46-890f-3502e548bf66-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d5e1220e-a41a-4e46-890f-3502e548bf66\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.033745 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5e1220e-a41a-4e46-890f-3502e548bf66-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d5e1220e-a41a-4e46-890f-3502e548bf66\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.033851 4786 
csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.033891 4786 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4944229a-bfa6-49fc-a40f-95e8a5af4dac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4944229a-bfa6-49fc-a40f-95e8a5af4dac\") pod \"openstack-galera-0\" (UID: \"d5e1220e-a41a-4e46-890f-3502e548bf66\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/56346058e805ef342fa271a6f5fc0df714ad9cb6492eef36521ca561b9f20e86/globalmount\"" pod="nova-kuttl-default/openstack-galera-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.038817 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5e1220e-a41a-4e46-890f-3502e548bf66-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d5e1220e-a41a-4e46-890f-3502e548bf66\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.055258 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm2tb\" (UniqueName: \"kubernetes.io/projected/d5e1220e-a41a-4e46-890f-3502e548bf66-kube-api-access-lm2tb\") pod \"openstack-galera-0\" (UID: \"d5e1220e-a41a-4e46-890f-3502e548bf66\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.092537 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-server-0"] Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.093961 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4944229a-bfa6-49fc-a40f-95e8a5af4dac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4944229a-bfa6-49fc-a40f-95e8a5af4dac\") pod \"openstack-galera-0\" (UID: 
\"d5e1220e-a41a-4e46-890f-3502e548bf66\") " pod="nova-kuttl-default/openstack-galera-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.131802 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9f2857cb-9399-4563-b68e-3b51cbd47f80-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f2857cb-9399-4563-b68e-3b51cbd47f80\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.131879 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4gvg\" (UniqueName: \"kubernetes.io/projected/9f2857cb-9399-4563-b68e-3b51cbd47f80-kube-api-access-p4gvg\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f2857cb-9399-4563-b68e-3b51cbd47f80\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.131945 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9f2857cb-9399-4563-b68e-3b51cbd47f80-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f2857cb-9399-4563-b68e-3b51cbd47f80\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.131965 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9f2857cb-9399-4563-b68e-3b51cbd47f80-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f2857cb-9399-4563-b68e-3b51cbd47f80\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.132563 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9f2857cb-9399-4563-b68e-3b51cbd47f80-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"9f2857cb-9399-4563-b68e-3b51cbd47f80\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.132919 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9f2857cb-9399-4563-b68e-3b51cbd47f80-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f2857cb-9399-4563-b68e-3b51cbd47f80\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.132973 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9f2857cb-9399-4563-b68e-3b51cbd47f80-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f2857cb-9399-4563-b68e-3b51cbd47f80\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.133397 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9f2857cb-9399-4563-b68e-3b51cbd47f80-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f2857cb-9399-4563-b68e-3b51cbd47f80\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.133447 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9f2857cb-9399-4563-b68e-3b51cbd47f80-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f2857cb-9399-4563-b68e-3b51cbd47f80\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.133476 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-94440d74-6431-48d9-8518-f7315e80ce71\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-94440d74-6431-48d9-8518-f7315e80ce71\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"9f2857cb-9399-4563-b68e-3b51cbd47f80\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.133517 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9f2857cb-9399-4563-b68e-3b51cbd47f80-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f2857cb-9399-4563-b68e-3b51cbd47f80\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.134443 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9f2857cb-9399-4563-b68e-3b51cbd47f80-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f2857cb-9399-4563-b68e-3b51cbd47f80\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.134892 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9f2857cb-9399-4563-b68e-3b51cbd47f80-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f2857cb-9399-4563-b68e-3b51cbd47f80\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.135015 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9f2857cb-9399-4563-b68e-3b51cbd47f80-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f2857cb-9399-4563-b68e-3b51cbd47f80\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.143430 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9f2857cb-9399-4563-b68e-3b51cbd47f80-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f2857cb-9399-4563-b68e-3b51cbd47f80\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" 
Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.144178 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9f2857cb-9399-4563-b68e-3b51cbd47f80-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f2857cb-9399-4563-b68e-3b51cbd47f80\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.144877 4786 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.148720 4786 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-94440d74-6431-48d9-8518-f7315e80ce71\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-94440d74-6431-48d9-8518-f7315e80ce71\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f2857cb-9399-4563-b68e-3b51cbd47f80\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a464a81e15caf5f7f61e9c65fcbd2e1b55521789e2a15378dd3673cdaa7557ef/globalmount\"" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.161441 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4gvg\" (UniqueName: \"kubernetes.io/projected/9f2857cb-9399-4563-b68e-3b51cbd47f80-kube-api-access-p4gvg\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f2857cb-9399-4563-b68e-3b51cbd47f80\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.200523 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/openstack-galera-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.204278 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-94440d74-6431-48d9-8518-f7315e80ce71\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-94440d74-6431-48d9-8518-f7315e80ce71\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f2857cb-9399-4563-b68e-3b51cbd47f80\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.276691 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-broadcaster-server-0"] Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.290508 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/memcached-0"] Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.291776 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/memcached-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.293161 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.296616 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"memcached-memcached-dockercfg-tszsf" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.296791 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"memcached-config-data" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.316014 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/memcached-0"] Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.335669 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a011c0d3-4039-465f-9ea6-acad60c397dd-kolla-config\") pod \"memcached-0\" (UID: \"a011c0d3-4039-465f-9ea6-acad60c397dd\") " pod="nova-kuttl-default/memcached-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.335715 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4sqb\" (UniqueName: \"kubernetes.io/projected/a011c0d3-4039-465f-9ea6-acad60c397dd-kube-api-access-q4sqb\") pod \"memcached-0\" (UID: \"a011c0d3-4039-465f-9ea6-acad60c397dd\") " pod="nova-kuttl-default/memcached-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.335766 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a011c0d3-4039-465f-9ea6-acad60c397dd-config-data\") pod \"memcached-0\" (UID: \"a011c0d3-4039-465f-9ea6-acad60c397dd\") " pod="nova-kuttl-default/memcached-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.437874 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/a011c0d3-4039-465f-9ea6-acad60c397dd-kolla-config\") pod \"memcached-0\" (UID: \"a011c0d3-4039-465f-9ea6-acad60c397dd\") " pod="nova-kuttl-default/memcached-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.438213 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4sqb\" (UniqueName: \"kubernetes.io/projected/a011c0d3-4039-465f-9ea6-acad60c397dd-kube-api-access-q4sqb\") pod \"memcached-0\" (UID: \"a011c0d3-4039-465f-9ea6-acad60c397dd\") " pod="nova-kuttl-default/memcached-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.438264 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a011c0d3-4039-465f-9ea6-acad60c397dd-config-data\") pod \"memcached-0\" (UID: \"a011c0d3-4039-465f-9ea6-acad60c397dd\") " pod="nova-kuttl-default/memcached-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.439057 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a011c0d3-4039-465f-9ea6-acad60c397dd-kolla-config\") pod \"memcached-0\" (UID: \"a011c0d3-4039-465f-9ea6-acad60c397dd\") " pod="nova-kuttl-default/memcached-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.439121 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a011c0d3-4039-465f-9ea6-acad60c397dd-config-data\") pod \"memcached-0\" (UID: \"a011c0d3-4039-465f-9ea6-acad60c397dd\") " pod="nova-kuttl-default/memcached-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.460480 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4sqb\" (UniqueName: \"kubernetes.io/projected/a011c0d3-4039-465f-9ea6-acad60c397dd-kube-api-access-q4sqb\") pod \"memcached-0\" (UID: \"a011c0d3-4039-465f-9ea6-acad60c397dd\") " pod="nova-kuttl-default/memcached-0" Jan 
27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.640053 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/memcached-0" Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.746171 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstack-galera-0"] Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.820884 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" event={"ID":"f76eacb2-75ca-46c4-badb-b1404b018bf6","Type":"ContainerStarted","Data":"07e64461d564320295537d5dbc63bd1b64df9e1a79b1b9bb6ece1047ff8f5f09"} Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.823105 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-galera-0" event={"ID":"d5e1220e-a41a-4e46-890f-3502e548bf66","Type":"ContainerStarted","Data":"8b54649ea1f4ff828e25d3663a3ddfea6344590d49fd18b31649b8ef7870da7a"} Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.824280 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-server-0" event={"ID":"c8472c3b-b877-4e6c-992f-f4146f81e3fc","Type":"ContainerStarted","Data":"ac1c9046fcedd0e1006770c242cb59552f3f5967a7eda1141f7208936ff33dcc"} Jan 27 13:25:48 crc kubenswrapper[4786]: I0127 13:25:48.841208 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-cell1-server-0"] Jan 27 13:25:49 crc kubenswrapper[4786]: I0127 13:25:49.098852 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/memcached-0"] Jan 27 13:25:49 crc kubenswrapper[4786]: W0127 13:25:49.113130 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda011c0d3_4039_465f_9ea6_acad60c397dd.slice/crio-1e990be3f2933c957f232857b052aae36dc0c11a91e6b8e69c714d548f0450b7 WatchSource:0}: Error finding container 
1e990be3f2933c957f232857b052aae36dc0c11a91e6b8e69c714d548f0450b7: Status 404 returned error can't find the container with id 1e990be3f2933c957f232857b052aae36dc0c11a91e6b8e69c714d548f0450b7 Jan 27 13:25:49 crc kubenswrapper[4786]: I0127 13:25:49.341629 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/openstack-cell1-galera-0"] Jan 27 13:25:49 crc kubenswrapper[4786]: I0127 13:25:49.343161 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 13:25:49 crc kubenswrapper[4786]: I0127 13:25:49.351987 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"galera-openstack-cell1-dockercfg-jrztl" Jan 27 13:25:49 crc kubenswrapper[4786]: I0127 13:25:49.352239 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-cell1-config-data" Jan 27 13:25:49 crc kubenswrapper[4786]: I0127 13:25:49.352658 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-cell1-scripts" Jan 27 13:25:49 crc kubenswrapper[4786]: I0127 13:25:49.352769 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"cert-galera-openstack-cell1-svc" Jan 27 13:25:49 crc kubenswrapper[4786]: I0127 13:25:49.380384 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstack-cell1-galera-0"] Jan 27 13:25:49 crc kubenswrapper[4786]: I0127 13:25:49.464273 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7a4e6dad-e854-4ecd-9441-04e72893ea29-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"7a4e6dad-e854-4ecd-9441-04e72893ea29\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 13:25:49 crc kubenswrapper[4786]: I0127 13:25:49.464331 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7a4e6dad-e854-4ecd-9441-04e72893ea29-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"7a4e6dad-e854-4ecd-9441-04e72893ea29\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 13:25:49 crc kubenswrapper[4786]: I0127 13:25:49.464393 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a710d178-a022-42eb-bc64-a778141dbdc8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a710d178-a022-42eb-bc64-a778141dbdc8\") pod \"openstack-cell1-galera-0\" (UID: \"7a4e6dad-e854-4ecd-9441-04e72893ea29\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 13:25:49 crc kubenswrapper[4786]: I0127 13:25:49.464433 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5vht\" (UniqueName: \"kubernetes.io/projected/7a4e6dad-e854-4ecd-9441-04e72893ea29-kube-api-access-b5vht\") pod \"openstack-cell1-galera-0\" (UID: \"7a4e6dad-e854-4ecd-9441-04e72893ea29\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 13:25:49 crc kubenswrapper[4786]: I0127 13:25:49.464448 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7a4e6dad-e854-4ecd-9441-04e72893ea29-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"7a4e6dad-e854-4ecd-9441-04e72893ea29\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 13:25:49 crc kubenswrapper[4786]: I0127 13:25:49.464466 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a4e6dad-e854-4ecd-9441-04e72893ea29-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"7a4e6dad-e854-4ecd-9441-04e72893ea29\") " 
pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 13:25:49 crc kubenswrapper[4786]: I0127 13:25:49.464482 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a4e6dad-e854-4ecd-9441-04e72893ea29-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"7a4e6dad-e854-4ecd-9441-04e72893ea29\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 13:25:49 crc kubenswrapper[4786]: I0127 13:25:49.464501 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a4e6dad-e854-4ecd-9441-04e72893ea29-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"7a4e6dad-e854-4ecd-9441-04e72893ea29\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 13:25:49 crc kubenswrapper[4786]: I0127 13:25:49.566251 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a710d178-a022-42eb-bc64-a778141dbdc8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a710d178-a022-42eb-bc64-a778141dbdc8\") pod \"openstack-cell1-galera-0\" (UID: \"7a4e6dad-e854-4ecd-9441-04e72893ea29\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 13:25:49 crc kubenswrapper[4786]: I0127 13:25:49.566380 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5vht\" (UniqueName: \"kubernetes.io/projected/7a4e6dad-e854-4ecd-9441-04e72893ea29-kube-api-access-b5vht\") pod \"openstack-cell1-galera-0\" (UID: \"7a4e6dad-e854-4ecd-9441-04e72893ea29\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 13:25:49 crc kubenswrapper[4786]: I0127 13:25:49.566454 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7a4e6dad-e854-4ecd-9441-04e72893ea29-config-data-generated\") pod 
\"openstack-cell1-galera-0\" (UID: \"7a4e6dad-e854-4ecd-9441-04e72893ea29\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 13:25:49 crc kubenswrapper[4786]: I0127 13:25:49.566556 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a4e6dad-e854-4ecd-9441-04e72893ea29-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"7a4e6dad-e854-4ecd-9441-04e72893ea29\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 13:25:49 crc kubenswrapper[4786]: I0127 13:25:49.566576 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a4e6dad-e854-4ecd-9441-04e72893ea29-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"7a4e6dad-e854-4ecd-9441-04e72893ea29\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 13:25:49 crc kubenswrapper[4786]: I0127 13:25:49.566621 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a4e6dad-e854-4ecd-9441-04e72893ea29-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"7a4e6dad-e854-4ecd-9441-04e72893ea29\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 13:25:49 crc kubenswrapper[4786]: I0127 13:25:49.566664 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7a4e6dad-e854-4ecd-9441-04e72893ea29-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"7a4e6dad-e854-4ecd-9441-04e72893ea29\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 13:25:49 crc kubenswrapper[4786]: I0127 13:25:49.566722 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7a4e6dad-e854-4ecd-9441-04e72893ea29-config-data-default\") pod \"openstack-cell1-galera-0\" 
(UID: \"7a4e6dad-e854-4ecd-9441-04e72893ea29\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 13:25:49 crc kubenswrapper[4786]: I0127 13:25:49.567849 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7a4e6dad-e854-4ecd-9441-04e72893ea29-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"7a4e6dad-e854-4ecd-9441-04e72893ea29\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 13:25:49 crc kubenswrapper[4786]: I0127 13:25:49.568683 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7a4e6dad-e854-4ecd-9441-04e72893ea29-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"7a4e6dad-e854-4ecd-9441-04e72893ea29\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 13:25:49 crc kubenswrapper[4786]: I0127 13:25:49.568710 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7a4e6dad-e854-4ecd-9441-04e72893ea29-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"7a4e6dad-e854-4ecd-9441-04e72893ea29\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 13:25:49 crc kubenswrapper[4786]: I0127 13:25:49.569053 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a4e6dad-e854-4ecd-9441-04e72893ea29-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"7a4e6dad-e854-4ecd-9441-04e72893ea29\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 13:25:49 crc kubenswrapper[4786]: I0127 13:25:49.570484 4786 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 13:25:49 crc kubenswrapper[4786]: I0127 13:25:49.570516 4786 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a710d178-a022-42eb-bc64-a778141dbdc8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a710d178-a022-42eb-bc64-a778141dbdc8\") pod \"openstack-cell1-galera-0\" (UID: \"7a4e6dad-e854-4ecd-9441-04e72893ea29\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/de6d9c3fb2f0c1e75987ab2420066c0ee44eae094274a94db6768118d0056ed6/globalmount\"" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 13:25:49 crc kubenswrapper[4786]: I0127 13:25:49.574193 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a4e6dad-e854-4ecd-9441-04e72893ea29-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"7a4e6dad-e854-4ecd-9441-04e72893ea29\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 13:25:49 crc kubenswrapper[4786]: I0127 13:25:49.576004 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a4e6dad-e854-4ecd-9441-04e72893ea29-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"7a4e6dad-e854-4ecd-9441-04e72893ea29\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 13:25:49 crc kubenswrapper[4786]: I0127 13:25:49.584465 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5vht\" (UniqueName: \"kubernetes.io/projected/7a4e6dad-e854-4ecd-9441-04e72893ea29-kube-api-access-b5vht\") pod \"openstack-cell1-galera-0\" (UID: \"7a4e6dad-e854-4ecd-9441-04e72893ea29\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 13:25:49 crc kubenswrapper[4786]: I0127 13:25:49.606451 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a710d178-a022-42eb-bc64-a778141dbdc8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a710d178-a022-42eb-bc64-a778141dbdc8\") pod \"openstack-cell1-galera-0\" (UID: \"7a4e6dad-e854-4ecd-9441-04e72893ea29\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 13:25:49 crc kubenswrapper[4786]: I0127 13:25:49.696064 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 13:25:49 crc kubenswrapper[4786]: I0127 13:25:49.836565 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-cell1-server-0" event={"ID":"9f2857cb-9399-4563-b68e-3b51cbd47f80","Type":"ContainerStarted","Data":"755ec33a2d205b5fee6f644849b4769982a4763a997921c10fffdfcb215b13fd"} Jan 27 13:25:49 crc kubenswrapper[4786]: I0127 13:25:49.838519 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/memcached-0" event={"ID":"a011c0d3-4039-465f-9ea6-acad60c397dd","Type":"ContainerStarted","Data":"1e990be3f2933c957f232857b052aae36dc0c11a91e6b8e69c714d548f0450b7"} Jan 27 13:25:50 crc kubenswrapper[4786]: I0127 13:25:50.259463 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstack-cell1-galera-0"] Jan 27 13:25:50 crc kubenswrapper[4786]: I0127 13:25:50.847151 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-cell1-galera-0" event={"ID":"7a4e6dad-e854-4ecd-9441-04e72893ea29","Type":"ContainerStarted","Data":"fc02753e74c71ce079d18e01e752171e3df620b5a72a9be6586b2a69ff4054d9"} Jan 27 13:26:03 crc kubenswrapper[4786]: E0127 13:26:03.494117 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 27 13:26:03 crc kubenswrapper[4786]: E0127 13:26:03.494914 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b5vht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerRes
izePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_nova-kuttl-default(7a4e6dad-e854-4ecd-9441-04e72893ea29): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 13:26:03 crc kubenswrapper[4786]: E0127 13:26:03.496386 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="nova-kuttl-default/openstack-cell1-galera-0" podUID="7a4e6dad-e854-4ecd-9441-04e72893ea29" Jan 27 13:26:03 crc kubenswrapper[4786]: E0127 13:26:03.512231 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 27 13:26:03 crc kubenswrapper[4786]: E0127 13:26:03.512429 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lm2tb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
openstack-galera-0_nova-kuttl-default(d5e1220e-a41a-4e46-890f-3502e548bf66): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 13:26:03 crc kubenswrapper[4786]: E0127 13:26:03.513936 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="nova-kuttl-default/openstack-galera-0" podUID="d5e1220e-a41a-4e46-890f-3502e548bf66" Jan 27 13:26:03 crc kubenswrapper[4786]: I0127 13:26:03.941539 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/memcached-0" event={"ID":"a011c0d3-4039-465f-9ea6-acad60c397dd","Type":"ContainerStarted","Data":"d7042d358c09a723e2006ed3b462d6d585ab30c8d7033cc4022a544f610a1822"} Jan 27 13:26:03 crc kubenswrapper[4786]: I0127 13:26:03.942088 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/memcached-0" Jan 27 13:26:03 crc kubenswrapper[4786]: E0127 13:26:03.942976 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="nova-kuttl-default/openstack-galera-0" podUID="d5e1220e-a41a-4e46-890f-3502e548bf66" Jan 27 13:26:03 crc kubenswrapper[4786]: E0127 13:26:03.943049 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="nova-kuttl-default/openstack-cell1-galera-0" podUID="7a4e6dad-e854-4ecd-9441-04e72893ea29" Jan 27 13:26:03 crc kubenswrapper[4786]: I0127 13:26:03.974502 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/memcached-0" 
podStartSLOduration=1.565080392 podStartE2EDuration="15.974476535s" podCreationTimestamp="2026-01-27 13:25:48 +0000 UTC" firstStartedPulling="2026-01-27 13:25:49.11487117 +0000 UTC m=+1132.325485289" lastFinishedPulling="2026-01-27 13:26:03.524267313 +0000 UTC m=+1146.734881432" observedRunningTime="2026-01-27 13:26:03.973386065 +0000 UTC m=+1147.184000184" watchObservedRunningTime="2026-01-27 13:26:03.974476535 +0000 UTC m=+1147.185090654" Jan 27 13:26:04 crc kubenswrapper[4786]: I0127 13:26:04.949210 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" event={"ID":"f76eacb2-75ca-46c4-badb-b1404b018bf6","Type":"ContainerStarted","Data":"ce98e554906531bc687798db6bfccbd8baf593b3d81e10eaa6ef7c89fb23089f"} Jan 27 13:26:04 crc kubenswrapper[4786]: I0127 13:26:04.950743 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-cell1-server-0" event={"ID":"9f2857cb-9399-4563-b68e-3b51cbd47f80","Type":"ContainerStarted","Data":"4de3b0a76f4c6460813696dc313a7e650140921402a84ec4d1f05bec854dbb14"} Jan 27 13:26:05 crc kubenswrapper[4786]: I0127 13:26:05.957839 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-server-0" event={"ID":"c8472c3b-b877-4e6c-992f-f4146f81e3fc","Type":"ContainerStarted","Data":"2fdd9c1bfcf4e0f00870c8e503c1231219751055cd57961ad56cb325aafafee8"} Jan 27 13:26:08 crc kubenswrapper[4786]: I0127 13:26:08.641739 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/memcached-0" Jan 27 13:26:09 crc kubenswrapper[4786]: I0127 13:26:09.532872 4786 patch_prober.go:28] interesting pod/machine-config-daemon-7bxtk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:26:09 crc kubenswrapper[4786]: I0127 13:26:09.533162 4786 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:26:16 crc kubenswrapper[4786]: I0127 13:26:16.048181 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-cell1-galera-0" event={"ID":"7a4e6dad-e854-4ecd-9441-04e72893ea29","Type":"ContainerStarted","Data":"fa47ea6fce0f8303b841367304eba08b8e8f2277da6a102ce4e32ba09289e8bb"} Jan 27 13:26:19 crc kubenswrapper[4786]: I0127 13:26:19.068464 4786 generic.go:334] "Generic (PLEG): container finished" podID="7a4e6dad-e854-4ecd-9441-04e72893ea29" containerID="fa47ea6fce0f8303b841367304eba08b8e8f2277da6a102ce4e32ba09289e8bb" exitCode=0 Jan 27 13:26:19 crc kubenswrapper[4786]: I0127 13:26:19.068566 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-cell1-galera-0" event={"ID":"7a4e6dad-e854-4ecd-9441-04e72893ea29","Type":"ContainerDied","Data":"fa47ea6fce0f8303b841367304eba08b8e8f2277da6a102ce4e32ba09289e8bb"} Jan 27 13:26:19 crc kubenswrapper[4786]: I0127 13:26:19.070616 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-galera-0" event={"ID":"d5e1220e-a41a-4e46-890f-3502e548bf66","Type":"ContainerStarted","Data":"d8aaec8a1eb695ce1e9a9cb9b0ae181fdcf4de2890b2ed3588fbff0ccc71d960"} Jan 27 13:26:20 crc kubenswrapper[4786]: I0127 13:26:20.079833 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-cell1-galera-0" event={"ID":"7a4e6dad-e854-4ecd-9441-04e72893ea29","Type":"ContainerStarted","Data":"e11c6b523fc651401badae2163d8fe343e5f225c479864df7dd7adf541eca139"} Jan 27 13:26:20 crc kubenswrapper[4786]: I0127 13:26:20.104235 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="nova-kuttl-default/openstack-cell1-galera-0" podStartSLOduration=7.482362095 podStartE2EDuration="32.10421148s" podCreationTimestamp="2026-01-27 13:25:48 +0000 UTC" firstStartedPulling="2026-01-27 13:25:50.271189338 +0000 UTC m=+1133.481803457" lastFinishedPulling="2026-01-27 13:26:14.893038723 +0000 UTC m=+1158.103652842" observedRunningTime="2026-01-27 13:26:20.102621897 +0000 UTC m=+1163.313236016" watchObservedRunningTime="2026-01-27 13:26:20.10421148 +0000 UTC m=+1163.314825609" Jan 27 13:26:22 crc kubenswrapper[4786]: I0127 13:26:22.100059 4786 generic.go:334] "Generic (PLEG): container finished" podID="d5e1220e-a41a-4e46-890f-3502e548bf66" containerID="d8aaec8a1eb695ce1e9a9cb9b0ae181fdcf4de2890b2ed3588fbff0ccc71d960" exitCode=0 Jan 27 13:26:22 crc kubenswrapper[4786]: I0127 13:26:22.100193 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-galera-0" event={"ID":"d5e1220e-a41a-4e46-890f-3502e548bf66","Type":"ContainerDied","Data":"d8aaec8a1eb695ce1e9a9cb9b0ae181fdcf4de2890b2ed3588fbff0ccc71d960"} Jan 27 13:26:23 crc kubenswrapper[4786]: I0127 13:26:23.108264 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-galera-0" event={"ID":"d5e1220e-a41a-4e46-890f-3502e548bf66","Type":"ContainerStarted","Data":"75c28ef3612b83a616950f5c15c4132612a94891784c00e3a0671b71b2226871"} Jan 27 13:26:23 crc kubenswrapper[4786]: I0127 13:26:23.134739 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/openstack-galera-0" podStartSLOduration=-9223371999.720064 podStartE2EDuration="37.134711618s" podCreationTimestamp="2026-01-27 13:25:46 +0000 UTC" firstStartedPulling="2026-01-27 13:25:48.784148962 +0000 UTC m=+1131.994763091" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:26:23.126345378 +0000 UTC m=+1166.336959517" watchObservedRunningTime="2026-01-27 13:26:23.134711618 +0000 UTC m=+1166.345325737" Jan 27 13:26:28 crc 
kubenswrapper[4786]: I0127 13:26:28.202374 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/openstack-galera-0" Jan 27 13:26:28 crc kubenswrapper[4786]: I0127 13:26:28.202983 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/openstack-galera-0" Jan 27 13:26:28 crc kubenswrapper[4786]: I0127 13:26:28.546689 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/openstack-galera-0" Jan 27 13:26:29 crc kubenswrapper[4786]: I0127 13:26:29.224257 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/openstack-galera-0" Jan 27 13:26:29 crc kubenswrapper[4786]: I0127 13:26:29.696592 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 13:26:29 crc kubenswrapper[4786]: I0127 13:26:29.696900 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 13:26:29 crc kubenswrapper[4786]: I0127 13:26:29.759675 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 13:26:30 crc kubenswrapper[4786]: I0127 13:26:30.220455 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 27 13:26:36 crc kubenswrapper[4786]: I0127 13:26:36.907922 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/root-account-create-update-pmpsn"] Jan 27 13:26:36 crc kubenswrapper[4786]: I0127 13:26:36.909537 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-pmpsn" Jan 27 13:26:36 crc kubenswrapper[4786]: I0127 13:26:36.911463 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"openstack-mariadb-root-db-secret" Jan 27 13:26:36 crc kubenswrapper[4786]: I0127 13:26:36.915839 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-pmpsn"] Jan 27 13:26:36 crc kubenswrapper[4786]: I0127 13:26:36.995963 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ef5cfeb-c8dd-4019-a3c0-2fd89aafb911-operator-scripts\") pod \"root-account-create-update-pmpsn\" (UID: \"5ef5cfeb-c8dd-4019-a3c0-2fd89aafb911\") " pod="nova-kuttl-default/root-account-create-update-pmpsn" Jan 27 13:26:36 crc kubenswrapper[4786]: I0127 13:26:36.996130 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcsqh\" (UniqueName: \"kubernetes.io/projected/5ef5cfeb-c8dd-4019-a3c0-2fd89aafb911-kube-api-access-pcsqh\") pod \"root-account-create-update-pmpsn\" (UID: \"5ef5cfeb-c8dd-4019-a3c0-2fd89aafb911\") " pod="nova-kuttl-default/root-account-create-update-pmpsn" Jan 27 13:26:37 crc kubenswrapper[4786]: I0127 13:26:37.097636 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcsqh\" (UniqueName: \"kubernetes.io/projected/5ef5cfeb-c8dd-4019-a3c0-2fd89aafb911-kube-api-access-pcsqh\") pod \"root-account-create-update-pmpsn\" (UID: \"5ef5cfeb-c8dd-4019-a3c0-2fd89aafb911\") " pod="nova-kuttl-default/root-account-create-update-pmpsn" Jan 27 13:26:37 crc kubenswrapper[4786]: I0127 13:26:37.097704 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ef5cfeb-c8dd-4019-a3c0-2fd89aafb911-operator-scripts\") pod 
\"root-account-create-update-pmpsn\" (UID: \"5ef5cfeb-c8dd-4019-a3c0-2fd89aafb911\") " pod="nova-kuttl-default/root-account-create-update-pmpsn" Jan 27 13:26:37 crc kubenswrapper[4786]: I0127 13:26:37.098429 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ef5cfeb-c8dd-4019-a3c0-2fd89aafb911-operator-scripts\") pod \"root-account-create-update-pmpsn\" (UID: \"5ef5cfeb-c8dd-4019-a3c0-2fd89aafb911\") " pod="nova-kuttl-default/root-account-create-update-pmpsn" Jan 27 13:26:37 crc kubenswrapper[4786]: I0127 13:26:37.115779 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcsqh\" (UniqueName: \"kubernetes.io/projected/5ef5cfeb-c8dd-4019-a3c0-2fd89aafb911-kube-api-access-pcsqh\") pod \"root-account-create-update-pmpsn\" (UID: \"5ef5cfeb-c8dd-4019-a3c0-2fd89aafb911\") " pod="nova-kuttl-default/root-account-create-update-pmpsn" Jan 27 13:26:37 crc kubenswrapper[4786]: I0127 13:26:37.221037 4786 generic.go:334] "Generic (PLEG): container finished" podID="9f2857cb-9399-4563-b68e-3b51cbd47f80" containerID="4de3b0a76f4c6460813696dc313a7e650140921402a84ec4d1f05bec854dbb14" exitCode=0 Jan 27 13:26:37 crc kubenswrapper[4786]: I0127 13:26:37.221129 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-cell1-server-0" event={"ID":"9f2857cb-9399-4563-b68e-3b51cbd47f80","Type":"ContainerDied","Data":"4de3b0a76f4c6460813696dc313a7e650140921402a84ec4d1f05bec854dbb14"} Jan 27 13:26:37 crc kubenswrapper[4786]: I0127 13:26:37.224012 4786 generic.go:334] "Generic (PLEG): container finished" podID="f76eacb2-75ca-46c4-badb-b1404b018bf6" containerID="ce98e554906531bc687798db6bfccbd8baf593b3d81e10eaa6ef7c89fb23089f" exitCode=0 Jan 27 13:26:37 crc kubenswrapper[4786]: I0127 13:26:37.224065 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" 
event={"ID":"f76eacb2-75ca-46c4-badb-b1404b018bf6","Type":"ContainerDied","Data":"ce98e554906531bc687798db6bfccbd8baf593b3d81e10eaa6ef7c89fb23089f"} Jan 27 13:26:37 crc kubenswrapper[4786]: I0127 13:26:37.225789 4786 generic.go:334] "Generic (PLEG): container finished" podID="c8472c3b-b877-4e6c-992f-f4146f81e3fc" containerID="2fdd9c1bfcf4e0f00870c8e503c1231219751055cd57961ad56cb325aafafee8" exitCode=0 Jan 27 13:26:37 crc kubenswrapper[4786]: I0127 13:26:37.225824 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-server-0" event={"ID":"c8472c3b-b877-4e6c-992f-f4146f81e3fc","Type":"ContainerDied","Data":"2fdd9c1bfcf4e0f00870c8e503c1231219751055cd57961ad56cb325aafafee8"} Jan 27 13:26:37 crc kubenswrapper[4786]: I0127 13:26:37.242312 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-pmpsn" Jan 27 13:26:37 crc kubenswrapper[4786]: I0127 13:26:37.697851 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-pmpsn"] Jan 27 13:26:37 crc kubenswrapper[4786]: W0127 13:26:37.705285 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ef5cfeb_c8dd_4019_a3c0_2fd89aafb911.slice/crio-b53838df1c7174b0ed0e51474cb7b4e4410550ec9f089eebc1c41125c04268e8 WatchSource:0}: Error finding container b53838df1c7174b0ed0e51474cb7b4e4410550ec9f089eebc1c41125c04268e8: Status 404 returned error can't find the container with id b53838df1c7174b0ed0e51474cb7b4e4410550ec9f089eebc1c41125c04268e8 Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.118911 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-db-create-grv59"] Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.120124 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-db-create-grv59" Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.143248 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-db-create-grv59"] Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.211093 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-390c-account-create-update-74l8w"] Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.212439 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-390c-account-create-update-74l8w" Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.220754 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-db-secret" Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.222593 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9skn8\" (UniqueName: \"kubernetes.io/projected/ad879844-692e-4c55-8557-499c93a029c2-kube-api-access-9skn8\") pod \"keystone-db-create-grv59\" (UID: \"ad879844-692e-4c55-8557-499c93a029c2\") " pod="nova-kuttl-default/keystone-db-create-grv59" Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.222764 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad879844-692e-4c55-8557-499c93a029c2-operator-scripts\") pod \"keystone-db-create-grv59\" (UID: \"ad879844-692e-4c55-8557-499c93a029c2\") " pod="nova-kuttl-default/keystone-db-create-grv59" Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.224366 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-390c-account-create-update-74l8w"] Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.241465 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-server-0" 
event={"ID":"c8472c3b-b877-4e6c-992f-f4146f81e3fc","Type":"ContainerStarted","Data":"87f7d7b655de263a639f05fd5e444f2d9c78db05acce57a78dff47ff8392cab9"} Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.241748 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.245050 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-cell1-server-0" event={"ID":"9f2857cb-9399-4563-b68e-3b51cbd47f80","Type":"ContainerStarted","Data":"6836e6b34a82ccefd2657eb4cd001973f0f3ac60541c0af30a23ed320f2428a0"} Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.245324 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.247586 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-pmpsn" event={"ID":"5ef5cfeb-c8dd-4019-a3c0-2fd89aafb911","Type":"ContainerStarted","Data":"4ede8f703d3d70513781218f5a068930f520894b7584266350e0555b4c4d0ddc"} Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.247785 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-pmpsn" event={"ID":"5ef5cfeb-c8dd-4019-a3c0-2fd89aafb911","Type":"ContainerStarted","Data":"b53838df1c7174b0ed0e51474cb7b4e4410550ec9f089eebc1c41125c04268e8"} Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.250449 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" event={"ID":"f76eacb2-75ca-46c4-badb-b1404b018bf6","Type":"ContainerStarted","Data":"1e007b7993df4f02b01fbd3e75b2a6d974f5644972fe82f0a1d69049210b07f4"} Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.250663 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 
27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.264208 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/rabbitmq-server-0" podStartSLOduration=36.867314168 podStartE2EDuration="52.264192615s" podCreationTimestamp="2026-01-27 13:25:46 +0000 UTC" firstStartedPulling="2026-01-27 13:25:48.120880416 +0000 UTC m=+1131.331494535" lastFinishedPulling="2026-01-27 13:26:03.517758863 +0000 UTC m=+1146.728372982" observedRunningTime="2026-01-27 13:26:38.261147711 +0000 UTC m=+1181.471761850" watchObservedRunningTime="2026-01-27 13:26:38.264192615 +0000 UTC m=+1181.474806734" Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.291200 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/rabbitmq-cell1-server-0" podStartSLOduration=37.61174386 podStartE2EDuration="52.291184488s" podCreationTimestamp="2026-01-27 13:25:46 +0000 UTC" firstStartedPulling="2026-01-27 13:25:48.851721895 +0000 UTC m=+1132.062336024" lastFinishedPulling="2026-01-27 13:26:03.531162533 +0000 UTC m=+1146.741776652" observedRunningTime="2026-01-27 13:26:38.287910148 +0000 UTC m=+1181.498524267" watchObservedRunningTime="2026-01-27 13:26:38.291184488 +0000 UTC m=+1181.501798607" Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.323695 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9skn8\" (UniqueName: \"kubernetes.io/projected/ad879844-692e-4c55-8557-499c93a029c2-kube-api-access-9skn8\") pod \"keystone-db-create-grv59\" (UID: \"ad879844-692e-4c55-8557-499c93a029c2\") " pod="nova-kuttl-default/keystone-db-create-grv59" Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.323886 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/301e22f5-8034-4f38-a93d-077d430c969e-operator-scripts\") pod \"keystone-390c-account-create-update-74l8w\" (UID: 
\"301e22f5-8034-4f38-a93d-077d430c969e\") " pod="nova-kuttl-default/keystone-390c-account-create-update-74l8w" Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.323915 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ksvg\" (UniqueName: \"kubernetes.io/projected/301e22f5-8034-4f38-a93d-077d430c969e-kube-api-access-5ksvg\") pod \"keystone-390c-account-create-update-74l8w\" (UID: \"301e22f5-8034-4f38-a93d-077d430c969e\") " pod="nova-kuttl-default/keystone-390c-account-create-update-74l8w" Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.324039 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad879844-692e-4c55-8557-499c93a029c2-operator-scripts\") pod \"keystone-db-create-grv59\" (UID: \"ad879844-692e-4c55-8557-499c93a029c2\") " pod="nova-kuttl-default/keystone-db-create-grv59" Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.325194 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad879844-692e-4c55-8557-499c93a029c2-operator-scripts\") pod \"keystone-db-create-grv59\" (UID: \"ad879844-692e-4c55-8557-499c93a029c2\") " pod="nova-kuttl-default/keystone-db-create-grv59" Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.326809 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" podStartSLOduration=37.105692638 podStartE2EDuration="52.32679877s" podCreationTimestamp="2026-01-27 13:25:46 +0000 UTC" firstStartedPulling="2026-01-27 13:25:48.294943405 +0000 UTC m=+1131.505557524" lastFinishedPulling="2026-01-27 13:26:03.516049547 +0000 UTC m=+1146.726663656" observedRunningTime="2026-01-27 13:26:38.322942564 +0000 UTC m=+1181.533556683" watchObservedRunningTime="2026-01-27 13:26:38.32679877 +0000 UTC m=+1181.537412889" Jan 27 13:26:38 
crc kubenswrapper[4786]: I0127 13:26:38.362330 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9skn8\" (UniqueName: \"kubernetes.io/projected/ad879844-692e-4c55-8557-499c93a029c2-kube-api-access-9skn8\") pod \"keystone-db-create-grv59\" (UID: \"ad879844-692e-4c55-8557-499c93a029c2\") " pod="nova-kuttl-default/keystone-db-create-grv59" Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.398534 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/root-account-create-update-pmpsn" podStartSLOduration=2.398518108 podStartE2EDuration="2.398518108s" podCreationTimestamp="2026-01-27 13:26:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:26:38.341049573 +0000 UTC m=+1181.551663702" watchObservedRunningTime="2026-01-27 13:26:38.398518108 +0000 UTC m=+1181.609132227" Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.401232 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/placement-db-create-xhxd7"] Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.402122 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-db-create-xhxd7" Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.422385 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-db-create-xhxd7"] Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.425751 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbfk6\" (UniqueName: \"kubernetes.io/projected/92ed6245-6eb9-4e9c-aca7-3b0d9d205639-kube-api-access-hbfk6\") pod \"placement-db-create-xhxd7\" (UID: \"92ed6245-6eb9-4e9c-aca7-3b0d9d205639\") " pod="nova-kuttl-default/placement-db-create-xhxd7" Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.425900 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/301e22f5-8034-4f38-a93d-077d430c969e-operator-scripts\") pod \"keystone-390c-account-create-update-74l8w\" (UID: \"301e22f5-8034-4f38-a93d-077d430c969e\") " pod="nova-kuttl-default/keystone-390c-account-create-update-74l8w" Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.425932 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ksvg\" (UniqueName: \"kubernetes.io/projected/301e22f5-8034-4f38-a93d-077d430c969e-kube-api-access-5ksvg\") pod \"keystone-390c-account-create-update-74l8w\" (UID: \"301e22f5-8034-4f38-a93d-077d430c969e\") " pod="nova-kuttl-default/keystone-390c-account-create-update-74l8w" Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.425982 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92ed6245-6eb9-4e9c-aca7-3b0d9d205639-operator-scripts\") pod \"placement-db-create-xhxd7\" (UID: \"92ed6245-6eb9-4e9c-aca7-3b0d9d205639\") " pod="nova-kuttl-default/placement-db-create-xhxd7" Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 
13:26:38.442204 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ksvg\" (UniqueName: \"kubernetes.io/projected/301e22f5-8034-4f38-a93d-077d430c969e-kube-api-access-5ksvg\") pod \"keystone-390c-account-create-update-74l8w\" (UID: \"301e22f5-8034-4f38-a93d-077d430c969e\") " pod="nova-kuttl-default/keystone-390c-account-create-update-74l8w" Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.443318 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/301e22f5-8034-4f38-a93d-077d430c969e-operator-scripts\") pod \"keystone-390c-account-create-update-74l8w\" (UID: \"301e22f5-8034-4f38-a93d-077d430c969e\") " pod="nova-kuttl-default/keystone-390c-account-create-update-74l8w" Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.475144 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-create-grv59" Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.527306 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92ed6245-6eb9-4e9c-aca7-3b0d9d205639-operator-scripts\") pod \"placement-db-create-xhxd7\" (UID: \"92ed6245-6eb9-4e9c-aca7-3b0d9d205639\") " pod="nova-kuttl-default/placement-db-create-xhxd7" Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.527400 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbfk6\" (UniqueName: \"kubernetes.io/projected/92ed6245-6eb9-4e9c-aca7-3b0d9d205639-kube-api-access-hbfk6\") pod \"placement-db-create-xhxd7\" (UID: \"92ed6245-6eb9-4e9c-aca7-3b0d9d205639\") " pod="nova-kuttl-default/placement-db-create-xhxd7" Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.528445 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/92ed6245-6eb9-4e9c-aca7-3b0d9d205639-operator-scripts\") pod \"placement-db-create-xhxd7\" (UID: \"92ed6245-6eb9-4e9c-aca7-3b0d9d205639\") " pod="nova-kuttl-default/placement-db-create-xhxd7" Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.536504 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-390c-account-create-update-74l8w" Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.537223 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/placement-1281-account-create-update-kmk5c"] Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.538344 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-1281-account-create-update-kmk5c" Jan 27 13:26:38 crc kubenswrapper[4786]: W0127 13:26:38.540964 4786 reflector.go:561] object-"nova-kuttl-default"/"placement-db-secret": failed to list *v1.Secret: secrets "placement-db-secret" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "nova-kuttl-default": no relationship found between node 'crc' and this object Jan 27 13:26:38 crc kubenswrapper[4786]: E0127 13:26:38.541002 4786 reflector.go:158] "Unhandled Error" err="object-\"nova-kuttl-default\"/\"placement-db-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"placement-db-secret\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"nova-kuttl-default\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.552460 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-1281-account-create-update-kmk5c"] Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.569114 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-hbfk6\" (UniqueName: \"kubernetes.io/projected/92ed6245-6eb9-4e9c-aca7-3b0d9d205639-kube-api-access-hbfk6\") pod \"placement-db-create-xhxd7\" (UID: \"92ed6245-6eb9-4e9c-aca7-3b0d9d205639\") " pod="nova-kuttl-default/placement-db-create-xhxd7" Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.629005 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63d091d7-88cf-41d7-8ae2-efc780052648-operator-scripts\") pod \"placement-1281-account-create-update-kmk5c\" (UID: \"63d091d7-88cf-41d7-8ae2-efc780052648\") " pod="nova-kuttl-default/placement-1281-account-create-update-kmk5c" Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.629314 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtmzq\" (UniqueName: \"kubernetes.io/projected/63d091d7-88cf-41d7-8ae2-efc780052648-kube-api-access-rtmzq\") pod \"placement-1281-account-create-update-kmk5c\" (UID: \"63d091d7-88cf-41d7-8ae2-efc780052648\") " pod="nova-kuttl-default/placement-1281-account-create-update-kmk5c" Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.717289 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-db-create-xhxd7" Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.730869 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtmzq\" (UniqueName: \"kubernetes.io/projected/63d091d7-88cf-41d7-8ae2-efc780052648-kube-api-access-rtmzq\") pod \"placement-1281-account-create-update-kmk5c\" (UID: \"63d091d7-88cf-41d7-8ae2-efc780052648\") " pod="nova-kuttl-default/placement-1281-account-create-update-kmk5c" Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.730961 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63d091d7-88cf-41d7-8ae2-efc780052648-operator-scripts\") pod \"placement-1281-account-create-update-kmk5c\" (UID: \"63d091d7-88cf-41d7-8ae2-efc780052648\") " pod="nova-kuttl-default/placement-1281-account-create-update-kmk5c" Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.731964 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63d091d7-88cf-41d7-8ae2-efc780052648-operator-scripts\") pod \"placement-1281-account-create-update-kmk5c\" (UID: \"63d091d7-88cf-41d7-8ae2-efc780052648\") " pod="nova-kuttl-default/placement-1281-account-create-update-kmk5c" Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.749723 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtmzq\" (UniqueName: \"kubernetes.io/projected/63d091d7-88cf-41d7-8ae2-efc780052648-kube-api-access-rtmzq\") pod \"placement-1281-account-create-update-kmk5c\" (UID: \"63d091d7-88cf-41d7-8ae2-efc780052648\") " pod="nova-kuttl-default/placement-1281-account-create-update-kmk5c" Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.852727 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-1281-account-create-update-kmk5c" Jan 27 13:26:38 crc kubenswrapper[4786]: I0127 13:26:38.998749 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-db-create-grv59"] Jan 27 13:26:39 crc kubenswrapper[4786]: I0127 13:26:39.059106 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-390c-account-create-update-74l8w"] Jan 27 13:26:39 crc kubenswrapper[4786]: I0127 13:26:39.176887 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-db-create-xhxd7"] Jan 27 13:26:39 crc kubenswrapper[4786]: I0127 13:26:39.259567 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-390c-account-create-update-74l8w" event={"ID":"301e22f5-8034-4f38-a93d-077d430c969e","Type":"ContainerStarted","Data":"e747ba5a7e22d3f5bdff7bea0b91f835c55049cd5bdf32ef89210caa7b86a6f8"} Jan 27 13:26:39 crc kubenswrapper[4786]: I0127 13:26:39.261311 4786 generic.go:334] "Generic (PLEG): container finished" podID="5ef5cfeb-c8dd-4019-a3c0-2fd89aafb911" containerID="4ede8f703d3d70513781218f5a068930f520894b7584266350e0555b4c4d0ddc" exitCode=0 Jan 27 13:26:39 crc kubenswrapper[4786]: I0127 13:26:39.261392 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-pmpsn" event={"ID":"5ef5cfeb-c8dd-4019-a3c0-2fd89aafb911","Type":"ContainerDied","Data":"4ede8f703d3d70513781218f5a068930f520894b7584266350e0555b4c4d0ddc"} Jan 27 13:26:39 crc kubenswrapper[4786]: I0127 13:26:39.262650 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-create-xhxd7" event={"ID":"92ed6245-6eb9-4e9c-aca7-3b0d9d205639","Type":"ContainerStarted","Data":"4fc4f4fdf0670238bb1a868a7c026cfcc2a34ca96a41c56fa2b19b6a10fa86f7"} Jan 27 13:26:39 crc kubenswrapper[4786]: I0127 13:26:39.264234 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="nova-kuttl-default/keystone-db-create-grv59" event={"ID":"ad879844-692e-4c55-8557-499c93a029c2","Type":"ContainerStarted","Data":"f960d8f13f37a8c4cf1532fe3201ddf027407312849e1b46952322d3f748618e"} Jan 27 13:26:39 crc kubenswrapper[4786]: W0127 13:26:39.317480 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63d091d7_88cf_41d7_8ae2_efc780052648.slice/crio-01e7e7d3c13a21aad16469fcc39a25750ba8389fd97d1f6a92cdb309f151237f WatchSource:0}: Error finding container 01e7e7d3c13a21aad16469fcc39a25750ba8389fd97d1f6a92cdb309f151237f: Status 404 returned error can't find the container with id 01e7e7d3c13a21aad16469fcc39a25750ba8389fd97d1f6a92cdb309f151237f Jan 27 13:26:39 crc kubenswrapper[4786]: I0127 13:26:39.320215 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-1281-account-create-update-kmk5c"] Jan 27 13:26:39 crc kubenswrapper[4786]: I0127 13:26:39.532977 4786 patch_prober.go:28] interesting pod/machine-config-daemon-7bxtk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:26:39 crc kubenswrapper[4786]: I0127 13:26:39.533028 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:26:39 crc kubenswrapper[4786]: I0127 13:26:39.776656 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-db-secret" Jan 27 13:26:40 crc kubenswrapper[4786]: I0127 13:26:40.272120 4786 generic.go:334] "Generic (PLEG): container finished" 
podID="63d091d7-88cf-41d7-8ae2-efc780052648" containerID="9b438c7363fe3efde2c4c826c373f3bc6a9166501adbb9ac016ee740e66f09c8" exitCode=0 Jan 27 13:26:40 crc kubenswrapper[4786]: I0127 13:26:40.272180 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-1281-account-create-update-kmk5c" event={"ID":"63d091d7-88cf-41d7-8ae2-efc780052648","Type":"ContainerDied","Data":"9b438c7363fe3efde2c4c826c373f3bc6a9166501adbb9ac016ee740e66f09c8"} Jan 27 13:26:40 crc kubenswrapper[4786]: I0127 13:26:40.272424 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-1281-account-create-update-kmk5c" event={"ID":"63d091d7-88cf-41d7-8ae2-efc780052648","Type":"ContainerStarted","Data":"01e7e7d3c13a21aad16469fcc39a25750ba8389fd97d1f6a92cdb309f151237f"} Jan 27 13:26:40 crc kubenswrapper[4786]: I0127 13:26:40.273875 4786 generic.go:334] "Generic (PLEG): container finished" podID="92ed6245-6eb9-4e9c-aca7-3b0d9d205639" containerID="21172873558a9e8c39b8255e7ac10559b9658bc6c3d25e87772f7b614d76b9d9" exitCode=0 Jan 27 13:26:40 crc kubenswrapper[4786]: I0127 13:26:40.273931 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-create-xhxd7" event={"ID":"92ed6245-6eb9-4e9c-aca7-3b0d9d205639","Type":"ContainerDied","Data":"21172873558a9e8c39b8255e7ac10559b9658bc6c3d25e87772f7b614d76b9d9"} Jan 27 13:26:40 crc kubenswrapper[4786]: I0127 13:26:40.275926 4786 generic.go:334] "Generic (PLEG): container finished" podID="ad879844-692e-4c55-8557-499c93a029c2" containerID="07d37153204b0a782613da8a8e923a9a2c87f2bd5c733ca15208b93851f5145c" exitCode=0 Jan 27 13:26:40 crc kubenswrapper[4786]: I0127 13:26:40.276002 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-create-grv59" event={"ID":"ad879844-692e-4c55-8557-499c93a029c2","Type":"ContainerDied","Data":"07d37153204b0a782613da8a8e923a9a2c87f2bd5c733ca15208b93851f5145c"} Jan 27 13:26:40 crc kubenswrapper[4786]: 
I0127 13:26:40.277697 4786 generic.go:334] "Generic (PLEG): container finished" podID="301e22f5-8034-4f38-a93d-077d430c969e" containerID="3cb2becea2a689c4096fc0ffe5eb38d7074024a664c3fca7bbb052a1cfd72a4d" exitCode=0 Jan 27 13:26:40 crc kubenswrapper[4786]: I0127 13:26:40.277750 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-390c-account-create-update-74l8w" event={"ID":"301e22f5-8034-4f38-a93d-077d430c969e","Type":"ContainerDied","Data":"3cb2becea2a689c4096fc0ffe5eb38d7074024a664c3fca7bbb052a1cfd72a4d"} Jan 27 13:26:40 crc kubenswrapper[4786]: I0127 13:26:40.541151 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-pmpsn" Jan 27 13:26:40 crc kubenswrapper[4786]: I0127 13:26:40.561288 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcsqh\" (UniqueName: \"kubernetes.io/projected/5ef5cfeb-c8dd-4019-a3c0-2fd89aafb911-kube-api-access-pcsqh\") pod \"5ef5cfeb-c8dd-4019-a3c0-2fd89aafb911\" (UID: \"5ef5cfeb-c8dd-4019-a3c0-2fd89aafb911\") " Jan 27 13:26:40 crc kubenswrapper[4786]: I0127 13:26:40.561343 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ef5cfeb-c8dd-4019-a3c0-2fd89aafb911-operator-scripts\") pod \"5ef5cfeb-c8dd-4019-a3c0-2fd89aafb911\" (UID: \"5ef5cfeb-c8dd-4019-a3c0-2fd89aafb911\") " Jan 27 13:26:40 crc kubenswrapper[4786]: I0127 13:26:40.562368 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ef5cfeb-c8dd-4019-a3c0-2fd89aafb911-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5ef5cfeb-c8dd-4019-a3c0-2fd89aafb911" (UID: "5ef5cfeb-c8dd-4019-a3c0-2fd89aafb911"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:26:40 crc kubenswrapper[4786]: I0127 13:26:40.567244 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ef5cfeb-c8dd-4019-a3c0-2fd89aafb911-kube-api-access-pcsqh" (OuterVolumeSpecName: "kube-api-access-pcsqh") pod "5ef5cfeb-c8dd-4019-a3c0-2fd89aafb911" (UID: "5ef5cfeb-c8dd-4019-a3c0-2fd89aafb911"). InnerVolumeSpecName "kube-api-access-pcsqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:26:40 crc kubenswrapper[4786]: I0127 13:26:40.662639 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcsqh\" (UniqueName: \"kubernetes.io/projected/5ef5cfeb-c8dd-4019-a3c0-2fd89aafb911-kube-api-access-pcsqh\") on node \"crc\" DevicePath \"\"" Jan 27 13:26:40 crc kubenswrapper[4786]: I0127 13:26:40.662942 4786 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ef5cfeb-c8dd-4019-a3c0-2fd89aafb911-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:26:41 crc kubenswrapper[4786]: I0127 13:26:41.286257 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-pmpsn" Jan 27 13:26:41 crc kubenswrapper[4786]: I0127 13:26:41.286266 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-pmpsn" event={"ID":"5ef5cfeb-c8dd-4019-a3c0-2fd89aafb911","Type":"ContainerDied","Data":"b53838df1c7174b0ed0e51474cb7b4e4410550ec9f089eebc1c41125c04268e8"} Jan 27 13:26:41 crc kubenswrapper[4786]: I0127 13:26:41.286422 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b53838df1c7174b0ed0e51474cb7b4e4410550ec9f089eebc1c41125c04268e8" Jan 27 13:26:41 crc kubenswrapper[4786]: I0127 13:26:41.558999 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-1281-account-create-update-kmk5c" Jan 27 13:26:41 crc kubenswrapper[4786]: I0127 13:26:41.619733 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63d091d7-88cf-41d7-8ae2-efc780052648-operator-scripts\") pod \"63d091d7-88cf-41d7-8ae2-efc780052648\" (UID: \"63d091d7-88cf-41d7-8ae2-efc780052648\") " Jan 27 13:26:41 crc kubenswrapper[4786]: I0127 13:26:41.620564 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63d091d7-88cf-41d7-8ae2-efc780052648-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "63d091d7-88cf-41d7-8ae2-efc780052648" (UID: "63d091d7-88cf-41d7-8ae2-efc780052648"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:26:41 crc kubenswrapper[4786]: I0127 13:26:41.620811 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtmzq\" (UniqueName: \"kubernetes.io/projected/63d091d7-88cf-41d7-8ae2-efc780052648-kube-api-access-rtmzq\") pod \"63d091d7-88cf-41d7-8ae2-efc780052648\" (UID: \"63d091d7-88cf-41d7-8ae2-efc780052648\") " Jan 27 13:26:41 crc kubenswrapper[4786]: I0127 13:26:41.621884 4786 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63d091d7-88cf-41d7-8ae2-efc780052648-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:26:41 crc kubenswrapper[4786]: I0127 13:26:41.626424 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63d091d7-88cf-41d7-8ae2-efc780052648-kube-api-access-rtmzq" (OuterVolumeSpecName: "kube-api-access-rtmzq") pod "63d091d7-88cf-41d7-8ae2-efc780052648" (UID: "63d091d7-88cf-41d7-8ae2-efc780052648"). InnerVolumeSpecName "kube-api-access-rtmzq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:26:41 crc kubenswrapper[4786]: I0127 13:26:41.723059 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtmzq\" (UniqueName: \"kubernetes.io/projected/63d091d7-88cf-41d7-8ae2-efc780052648-kube-api-access-rtmzq\") on node \"crc\" DevicePath \"\"" Jan 27 13:26:41 crc kubenswrapper[4786]: I0127 13:26:41.734967 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-create-grv59" Jan 27 13:26:41 crc kubenswrapper[4786]: I0127 13:26:41.740796 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-create-xhxd7" Jan 27 13:26:41 crc kubenswrapper[4786]: I0127 13:26:41.747879 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-390c-account-create-update-74l8w" Jan 27 13:26:41 crc kubenswrapper[4786]: I0127 13:26:41.823996 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad879844-692e-4c55-8557-499c93a029c2-operator-scripts\") pod \"ad879844-692e-4c55-8557-499c93a029c2\" (UID: \"ad879844-692e-4c55-8557-499c93a029c2\") " Jan 27 13:26:41 crc kubenswrapper[4786]: I0127 13:26:41.824050 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92ed6245-6eb9-4e9c-aca7-3b0d9d205639-operator-scripts\") pod \"92ed6245-6eb9-4e9c-aca7-3b0d9d205639\" (UID: \"92ed6245-6eb9-4e9c-aca7-3b0d9d205639\") " Jan 27 13:26:41 crc kubenswrapper[4786]: I0127 13:26:41.824110 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbfk6\" (UniqueName: \"kubernetes.io/projected/92ed6245-6eb9-4e9c-aca7-3b0d9d205639-kube-api-access-hbfk6\") pod \"92ed6245-6eb9-4e9c-aca7-3b0d9d205639\" (UID: 
\"92ed6245-6eb9-4e9c-aca7-3b0d9d205639\") " Jan 27 13:26:41 crc kubenswrapper[4786]: I0127 13:26:41.824172 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/301e22f5-8034-4f38-a93d-077d430c969e-operator-scripts\") pod \"301e22f5-8034-4f38-a93d-077d430c969e\" (UID: \"301e22f5-8034-4f38-a93d-077d430c969e\") " Jan 27 13:26:41 crc kubenswrapper[4786]: I0127 13:26:41.824225 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ksvg\" (UniqueName: \"kubernetes.io/projected/301e22f5-8034-4f38-a93d-077d430c969e-kube-api-access-5ksvg\") pod \"301e22f5-8034-4f38-a93d-077d430c969e\" (UID: \"301e22f5-8034-4f38-a93d-077d430c969e\") " Jan 27 13:26:41 crc kubenswrapper[4786]: I0127 13:26:41.824263 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9skn8\" (UniqueName: \"kubernetes.io/projected/ad879844-692e-4c55-8557-499c93a029c2-kube-api-access-9skn8\") pod \"ad879844-692e-4c55-8557-499c93a029c2\" (UID: \"ad879844-692e-4c55-8557-499c93a029c2\") " Jan 27 13:26:41 crc kubenswrapper[4786]: I0127 13:26:41.824693 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad879844-692e-4c55-8557-499c93a029c2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ad879844-692e-4c55-8557-499c93a029c2" (UID: "ad879844-692e-4c55-8557-499c93a029c2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:26:41 crc kubenswrapper[4786]: I0127 13:26:41.825027 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92ed6245-6eb9-4e9c-aca7-3b0d9d205639-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "92ed6245-6eb9-4e9c-aca7-3b0d9d205639" (UID: "92ed6245-6eb9-4e9c-aca7-3b0d9d205639"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:26:41 crc kubenswrapper[4786]: I0127 13:26:41.825106 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/301e22f5-8034-4f38-a93d-077d430c969e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "301e22f5-8034-4f38-a93d-077d430c969e" (UID: "301e22f5-8034-4f38-a93d-077d430c969e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:26:41 crc kubenswrapper[4786]: I0127 13:26:41.828020 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e22f5-8034-4f38-a93d-077d430c969e-kube-api-access-5ksvg" (OuterVolumeSpecName: "kube-api-access-5ksvg") pod "301e22f5-8034-4f38-a93d-077d430c969e" (UID: "301e22f5-8034-4f38-a93d-077d430c969e"). InnerVolumeSpecName "kube-api-access-5ksvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:26:41 crc kubenswrapper[4786]: I0127 13:26:41.828091 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad879844-692e-4c55-8557-499c93a029c2-kube-api-access-9skn8" (OuterVolumeSpecName: "kube-api-access-9skn8") pod "ad879844-692e-4c55-8557-499c93a029c2" (UID: "ad879844-692e-4c55-8557-499c93a029c2"). InnerVolumeSpecName "kube-api-access-9skn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:26:41 crc kubenswrapper[4786]: I0127 13:26:41.828394 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92ed6245-6eb9-4e9c-aca7-3b0d9d205639-kube-api-access-hbfk6" (OuterVolumeSpecName: "kube-api-access-hbfk6") pod "92ed6245-6eb9-4e9c-aca7-3b0d9d205639" (UID: "92ed6245-6eb9-4e9c-aca7-3b0d9d205639"). InnerVolumeSpecName "kube-api-access-hbfk6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:26:41 crc kubenswrapper[4786]: I0127 13:26:41.925966 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ksvg\" (UniqueName: \"kubernetes.io/projected/301e22f5-8034-4f38-a93d-077d430c969e-kube-api-access-5ksvg\") on node \"crc\" DevicePath \"\"" Jan 27 13:26:41 crc kubenswrapper[4786]: I0127 13:26:41.926232 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9skn8\" (UniqueName: \"kubernetes.io/projected/ad879844-692e-4c55-8557-499c93a029c2-kube-api-access-9skn8\") on node \"crc\" DevicePath \"\"" Jan 27 13:26:41 crc kubenswrapper[4786]: I0127 13:26:41.926333 4786 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad879844-692e-4c55-8557-499c93a029c2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:26:41 crc kubenswrapper[4786]: I0127 13:26:41.926416 4786 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92ed6245-6eb9-4e9c-aca7-3b0d9d205639-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:26:41 crc kubenswrapper[4786]: I0127 13:26:41.926494 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbfk6\" (UniqueName: \"kubernetes.io/projected/92ed6245-6eb9-4e9c-aca7-3b0d9d205639-kube-api-access-hbfk6\") on node \"crc\" DevicePath \"\"" Jan 27 13:26:41 crc kubenswrapper[4786]: I0127 13:26:41.926570 4786 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/301e22f5-8034-4f38-a93d-077d430c969e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:26:42 crc kubenswrapper[4786]: I0127 13:26:42.294467 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-db-create-grv59" Jan 27 13:26:42 crc kubenswrapper[4786]: I0127 13:26:42.295756 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-390c-account-create-update-74l8w" Jan 27 13:26:42 crc kubenswrapper[4786]: I0127 13:26:42.294454 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-create-grv59" event={"ID":"ad879844-692e-4c55-8557-499c93a029c2","Type":"ContainerDied","Data":"f960d8f13f37a8c4cf1532fe3201ddf027407312849e1b46952322d3f748618e"} Jan 27 13:26:42 crc kubenswrapper[4786]: I0127 13:26:42.304625 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f960d8f13f37a8c4cf1532fe3201ddf027407312849e1b46952322d3f748618e" Jan 27 13:26:42 crc kubenswrapper[4786]: I0127 13:26:42.304717 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-390c-account-create-update-74l8w" event={"ID":"301e22f5-8034-4f38-a93d-077d430c969e","Type":"ContainerDied","Data":"e747ba5a7e22d3f5bdff7bea0b91f835c55049cd5bdf32ef89210caa7b86a6f8"} Jan 27 13:26:42 crc kubenswrapper[4786]: I0127 13:26:42.304802 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e747ba5a7e22d3f5bdff7bea0b91f835c55049cd5bdf32ef89210caa7b86a6f8" Jan 27 13:26:42 crc kubenswrapper[4786]: I0127 13:26:42.306664 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-1281-account-create-update-kmk5c" event={"ID":"63d091d7-88cf-41d7-8ae2-efc780052648","Type":"ContainerDied","Data":"01e7e7d3c13a21aad16469fcc39a25750ba8389fd97d1f6a92cdb309f151237f"} Jan 27 13:26:42 crc kubenswrapper[4786]: I0127 13:26:42.306819 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01e7e7d3c13a21aad16469fcc39a25750ba8389fd97d1f6a92cdb309f151237f" Jan 27 13:26:42 crc kubenswrapper[4786]: I0127 13:26:42.307353 4786 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-1281-account-create-update-kmk5c" Jan 27 13:26:42 crc kubenswrapper[4786]: I0127 13:26:42.309693 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-create-xhxd7" event={"ID":"92ed6245-6eb9-4e9c-aca7-3b0d9d205639","Type":"ContainerDied","Data":"4fc4f4fdf0670238bb1a868a7c026cfcc2a34ca96a41c56fa2b19b6a10fa86f7"} Jan 27 13:26:42 crc kubenswrapper[4786]: I0127 13:26:42.309742 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fc4f4fdf0670238bb1a868a7c026cfcc2a34ca96a41c56fa2b19b6a10fa86f7" Jan 27 13:26:42 crc kubenswrapper[4786]: I0127 13:26:42.309762 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-create-xhxd7" Jan 27 13:26:43 crc kubenswrapper[4786]: I0127 13:26:43.443910 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/root-account-create-update-pmpsn"] Jan 27 13:26:43 crc kubenswrapper[4786]: I0127 13:26:43.449783 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/root-account-create-update-pmpsn"] Jan 27 13:26:43 crc kubenswrapper[4786]: I0127 13:26:43.475897 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ef5cfeb-c8dd-4019-a3c0-2fd89aafb911" path="/var/lib/kubelet/pods/5ef5cfeb-c8dd-4019-a3c0-2fd89aafb911/volumes" Jan 27 13:26:47 crc kubenswrapper[4786]: I0127 13:26:47.645368 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/rabbitmq-server-0" podUID="c8472c3b-b877-4e6c-992f-f4146f81e3fc" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.94:5672: connect: connection refused" Jan 27 13:26:47 crc kubenswrapper[4786]: I0127 13:26:47.944869 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" 
podUID="f76eacb2-75ca-46c4-badb-b1404b018bf6" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.95:5672: connect: connection refused" Jan 27 13:26:48 crc kubenswrapper[4786]: I0127 13:26:48.295161 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/rabbitmq-cell1-server-0" podUID="9f2857cb-9399-4563-b68e-3b51cbd47f80" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.97:5672: connect: connection refused" Jan 27 13:26:48 crc kubenswrapper[4786]: I0127 13:26:48.453921 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/root-account-create-update-7wz57"] Jan 27 13:26:48 crc kubenswrapper[4786]: E0127 13:26:48.455122 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad879844-692e-4c55-8557-499c93a029c2" containerName="mariadb-database-create" Jan 27 13:26:48 crc kubenswrapper[4786]: I0127 13:26:48.455143 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad879844-692e-4c55-8557-499c93a029c2" containerName="mariadb-database-create" Jan 27 13:26:48 crc kubenswrapper[4786]: E0127 13:26:48.455155 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ef5cfeb-c8dd-4019-a3c0-2fd89aafb911" containerName="mariadb-account-create-update" Jan 27 13:26:48 crc kubenswrapper[4786]: I0127 13:26:48.455161 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ef5cfeb-c8dd-4019-a3c0-2fd89aafb911" containerName="mariadb-account-create-update" Jan 27 13:26:48 crc kubenswrapper[4786]: E0127 13:26:48.455169 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92ed6245-6eb9-4e9c-aca7-3b0d9d205639" containerName="mariadb-database-create" Jan 27 13:26:48 crc kubenswrapper[4786]: I0127 13:26:48.455174 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="92ed6245-6eb9-4e9c-aca7-3b0d9d205639" containerName="mariadb-database-create" Jan 27 13:26:48 crc kubenswrapper[4786]: E0127 13:26:48.455195 4786 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="63d091d7-88cf-41d7-8ae2-efc780052648" containerName="mariadb-account-create-update" Jan 27 13:26:48 crc kubenswrapper[4786]: I0127 13:26:48.455201 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="63d091d7-88cf-41d7-8ae2-efc780052648" containerName="mariadb-account-create-update" Jan 27 13:26:48 crc kubenswrapper[4786]: E0127 13:26:48.455212 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="301e22f5-8034-4f38-a93d-077d430c969e" containerName="mariadb-account-create-update" Jan 27 13:26:48 crc kubenswrapper[4786]: I0127 13:26:48.455218 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="301e22f5-8034-4f38-a93d-077d430c969e" containerName="mariadb-account-create-update" Jan 27 13:26:48 crc kubenswrapper[4786]: I0127 13:26:48.455345 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad879844-692e-4c55-8557-499c93a029c2" containerName="mariadb-database-create" Jan 27 13:26:48 crc kubenswrapper[4786]: I0127 13:26:48.455356 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="92ed6245-6eb9-4e9c-aca7-3b0d9d205639" containerName="mariadb-database-create" Jan 27 13:26:48 crc kubenswrapper[4786]: I0127 13:26:48.455366 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="63d091d7-88cf-41d7-8ae2-efc780052648" containerName="mariadb-account-create-update" Jan 27 13:26:48 crc kubenswrapper[4786]: I0127 13:26:48.455374 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="301e22f5-8034-4f38-a93d-077d430c969e" containerName="mariadb-account-create-update" Jan 27 13:26:48 crc kubenswrapper[4786]: I0127 13:26:48.455388 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ef5cfeb-c8dd-4019-a3c0-2fd89aafb911" containerName="mariadb-account-create-update" Jan 27 13:26:48 crc kubenswrapper[4786]: I0127 13:26:48.455961 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-7wz57" Jan 27 13:26:48 crc kubenswrapper[4786]: I0127 13:26:48.457830 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"openstack-cell1-mariadb-root-db-secret" Jan 27 13:26:48 crc kubenswrapper[4786]: I0127 13:26:48.466232 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-7wz57"] Jan 27 13:26:48 crc kubenswrapper[4786]: I0127 13:26:48.519733 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn55w\" (UniqueName: \"kubernetes.io/projected/98bca412-7a6c-4bd1-b4d4-2665efe925e4-kube-api-access-gn55w\") pod \"root-account-create-update-7wz57\" (UID: \"98bca412-7a6c-4bd1-b4d4-2665efe925e4\") " pod="nova-kuttl-default/root-account-create-update-7wz57" Jan 27 13:26:48 crc kubenswrapper[4786]: I0127 13:26:48.520277 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98bca412-7a6c-4bd1-b4d4-2665efe925e4-operator-scripts\") pod \"root-account-create-update-7wz57\" (UID: \"98bca412-7a6c-4bd1-b4d4-2665efe925e4\") " pod="nova-kuttl-default/root-account-create-update-7wz57" Jan 27 13:26:48 crc kubenswrapper[4786]: I0127 13:26:48.621844 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn55w\" (UniqueName: \"kubernetes.io/projected/98bca412-7a6c-4bd1-b4d4-2665efe925e4-kube-api-access-gn55w\") pod \"root-account-create-update-7wz57\" (UID: \"98bca412-7a6c-4bd1-b4d4-2665efe925e4\") " pod="nova-kuttl-default/root-account-create-update-7wz57" Jan 27 13:26:48 crc kubenswrapper[4786]: I0127 13:26:48.622215 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/98bca412-7a6c-4bd1-b4d4-2665efe925e4-operator-scripts\") pod \"root-account-create-update-7wz57\" (UID: \"98bca412-7a6c-4bd1-b4d4-2665efe925e4\") " pod="nova-kuttl-default/root-account-create-update-7wz57" Jan 27 13:26:48 crc kubenswrapper[4786]: I0127 13:26:48.623163 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98bca412-7a6c-4bd1-b4d4-2665efe925e4-operator-scripts\") pod \"root-account-create-update-7wz57\" (UID: \"98bca412-7a6c-4bd1-b4d4-2665efe925e4\") " pod="nova-kuttl-default/root-account-create-update-7wz57" Jan 27 13:26:48 crc kubenswrapper[4786]: I0127 13:26:48.648432 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn55w\" (UniqueName: \"kubernetes.io/projected/98bca412-7a6c-4bd1-b4d4-2665efe925e4-kube-api-access-gn55w\") pod \"root-account-create-update-7wz57\" (UID: \"98bca412-7a6c-4bd1-b4d4-2665efe925e4\") " pod="nova-kuttl-default/root-account-create-update-7wz57" Jan 27 13:26:48 crc kubenswrapper[4786]: I0127 13:26:48.775063 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-7wz57" Jan 27 13:26:49 crc kubenswrapper[4786]: I0127 13:26:49.229020 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-7wz57"] Jan 27 13:26:49 crc kubenswrapper[4786]: I0127 13:26:49.360482 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-7wz57" event={"ID":"98bca412-7a6c-4bd1-b4d4-2665efe925e4","Type":"ContainerStarted","Data":"89ce2319c5c730b7a4672ef9875a57bb81719694d3eccb2d0b5250d024e8e934"} Jan 27 13:26:50 crc kubenswrapper[4786]: I0127 13:26:50.368328 4786 generic.go:334] "Generic (PLEG): container finished" podID="98bca412-7a6c-4bd1-b4d4-2665efe925e4" containerID="e14006dbe8e2adfabe4164de8a7d51b722809ddb5dc2e850803133f07afadeee" exitCode=0 Jan 27 13:26:50 crc kubenswrapper[4786]: I0127 13:26:50.368397 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-7wz57" event={"ID":"98bca412-7a6c-4bd1-b4d4-2665efe925e4","Type":"ContainerDied","Data":"e14006dbe8e2adfabe4164de8a7d51b722809ddb5dc2e850803133f07afadeee"} Jan 27 13:26:51 crc kubenswrapper[4786]: I0127 13:26:51.696419 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-7wz57" Jan 27 13:26:51 crc kubenswrapper[4786]: I0127 13:26:51.774664 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn55w\" (UniqueName: \"kubernetes.io/projected/98bca412-7a6c-4bd1-b4d4-2665efe925e4-kube-api-access-gn55w\") pod \"98bca412-7a6c-4bd1-b4d4-2665efe925e4\" (UID: \"98bca412-7a6c-4bd1-b4d4-2665efe925e4\") " Jan 27 13:26:51 crc kubenswrapper[4786]: I0127 13:26:51.774943 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98bca412-7a6c-4bd1-b4d4-2665efe925e4-operator-scripts\") pod \"98bca412-7a6c-4bd1-b4d4-2665efe925e4\" (UID: \"98bca412-7a6c-4bd1-b4d4-2665efe925e4\") " Jan 27 13:26:51 crc kubenswrapper[4786]: I0127 13:26:51.776132 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98bca412-7a6c-4bd1-b4d4-2665efe925e4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "98bca412-7a6c-4bd1-b4d4-2665efe925e4" (UID: "98bca412-7a6c-4bd1-b4d4-2665efe925e4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:26:51 crc kubenswrapper[4786]: I0127 13:26:51.782139 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98bca412-7a6c-4bd1-b4d4-2665efe925e4-kube-api-access-gn55w" (OuterVolumeSpecName: "kube-api-access-gn55w") pod "98bca412-7a6c-4bd1-b4d4-2665efe925e4" (UID: "98bca412-7a6c-4bd1-b4d4-2665efe925e4"). InnerVolumeSpecName "kube-api-access-gn55w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:26:51 crc kubenswrapper[4786]: I0127 13:26:51.877137 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gn55w\" (UniqueName: \"kubernetes.io/projected/98bca412-7a6c-4bd1-b4d4-2665efe925e4-kube-api-access-gn55w\") on node \"crc\" DevicePath \"\"" Jan 27 13:26:51 crc kubenswrapper[4786]: I0127 13:26:51.877187 4786 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98bca412-7a6c-4bd1-b4d4-2665efe925e4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:26:52 crc kubenswrapper[4786]: I0127 13:26:52.384868 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-7wz57" event={"ID":"98bca412-7a6c-4bd1-b4d4-2665efe925e4","Type":"ContainerDied","Data":"89ce2319c5c730b7a4672ef9875a57bb81719694d3eccb2d0b5250d024e8e934"} Jan 27 13:26:52 crc kubenswrapper[4786]: I0127 13:26:52.384919 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89ce2319c5c730b7a4672ef9875a57bb81719694d3eccb2d0b5250d024e8e934" Jan 27 13:26:52 crc kubenswrapper[4786]: I0127 13:26:52.385218 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-7wz57" Jan 27 13:26:57 crc kubenswrapper[4786]: I0127 13:26:57.645287 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/rabbitmq-server-0" Jan 27 13:26:57 crc kubenswrapper[4786]: I0127 13:26:57.944875 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 27 13:26:58 crc kubenswrapper[4786]: I0127 13:26:58.103928 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-db-sync-jkjg4"] Jan 27 13:26:58 crc kubenswrapper[4786]: E0127 13:26:58.104256 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98bca412-7a6c-4bd1-b4d4-2665efe925e4" containerName="mariadb-account-create-update" Jan 27 13:26:58 crc kubenswrapper[4786]: I0127 13:26:58.104273 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="98bca412-7a6c-4bd1-b4d4-2665efe925e4" containerName="mariadb-account-create-update" Jan 27 13:26:58 crc kubenswrapper[4786]: I0127 13:26:58.104437 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="98bca412-7a6c-4bd1-b4d4-2665efe925e4" containerName="mariadb-account-create-update" Jan 27 13:26:58 crc kubenswrapper[4786]: I0127 13:26:58.104938 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-db-sync-jkjg4" Jan 27 13:26:58 crc kubenswrapper[4786]: I0127 13:26:58.107303 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-keystone-dockercfg-mjw8x" Jan 27 13:26:58 crc kubenswrapper[4786]: I0127 13:26:58.109112 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone" Jan 27 13:26:58 crc kubenswrapper[4786]: I0127 13:26:58.109620 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-config-data" Jan 27 13:26:58 crc kubenswrapper[4786]: I0127 13:26:58.109789 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-scripts" Jan 27 13:26:58 crc kubenswrapper[4786]: I0127 13:26:58.114749 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-db-sync-jkjg4"] Jan 27 13:26:58 crc kubenswrapper[4786]: I0127 13:26:58.264686 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwjng\" (UniqueName: \"kubernetes.io/projected/1c0d1344-182d-4ada-9aa9-0f105aaaccc6-kube-api-access-vwjng\") pod \"keystone-db-sync-jkjg4\" (UID: \"1c0d1344-182d-4ada-9aa9-0f105aaaccc6\") " pod="nova-kuttl-default/keystone-db-sync-jkjg4" Jan 27 13:26:58 crc kubenswrapper[4786]: I0127 13:26:58.264754 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c0d1344-182d-4ada-9aa9-0f105aaaccc6-combined-ca-bundle\") pod \"keystone-db-sync-jkjg4\" (UID: \"1c0d1344-182d-4ada-9aa9-0f105aaaccc6\") " pod="nova-kuttl-default/keystone-db-sync-jkjg4" Jan 27 13:26:58 crc kubenswrapper[4786]: I0127 13:26:58.264804 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1c0d1344-182d-4ada-9aa9-0f105aaaccc6-config-data\") pod \"keystone-db-sync-jkjg4\" (UID: \"1c0d1344-182d-4ada-9aa9-0f105aaaccc6\") " pod="nova-kuttl-default/keystone-db-sync-jkjg4" Jan 27 13:26:58 crc kubenswrapper[4786]: I0127 13:26:58.295316 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 27 13:26:58 crc kubenswrapper[4786]: I0127 13:26:58.365624 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwjng\" (UniqueName: \"kubernetes.io/projected/1c0d1344-182d-4ada-9aa9-0f105aaaccc6-kube-api-access-vwjng\") pod \"keystone-db-sync-jkjg4\" (UID: \"1c0d1344-182d-4ada-9aa9-0f105aaaccc6\") " pod="nova-kuttl-default/keystone-db-sync-jkjg4" Jan 27 13:26:58 crc kubenswrapper[4786]: I0127 13:26:58.365693 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c0d1344-182d-4ada-9aa9-0f105aaaccc6-combined-ca-bundle\") pod \"keystone-db-sync-jkjg4\" (UID: \"1c0d1344-182d-4ada-9aa9-0f105aaaccc6\") " pod="nova-kuttl-default/keystone-db-sync-jkjg4" Jan 27 13:26:58 crc kubenswrapper[4786]: I0127 13:26:58.365730 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c0d1344-182d-4ada-9aa9-0f105aaaccc6-config-data\") pod \"keystone-db-sync-jkjg4\" (UID: \"1c0d1344-182d-4ada-9aa9-0f105aaaccc6\") " pod="nova-kuttl-default/keystone-db-sync-jkjg4" Jan 27 13:26:58 crc kubenswrapper[4786]: I0127 13:26:58.374572 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c0d1344-182d-4ada-9aa9-0f105aaaccc6-combined-ca-bundle\") pod \"keystone-db-sync-jkjg4\" (UID: \"1c0d1344-182d-4ada-9aa9-0f105aaaccc6\") " pod="nova-kuttl-default/keystone-db-sync-jkjg4" Jan 27 13:26:58 crc kubenswrapper[4786]: I0127 
13:26:58.375007 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c0d1344-182d-4ada-9aa9-0f105aaaccc6-config-data\") pod \"keystone-db-sync-jkjg4\" (UID: \"1c0d1344-182d-4ada-9aa9-0f105aaaccc6\") " pod="nova-kuttl-default/keystone-db-sync-jkjg4" Jan 27 13:26:58 crc kubenswrapper[4786]: I0127 13:26:58.385955 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwjng\" (UniqueName: \"kubernetes.io/projected/1c0d1344-182d-4ada-9aa9-0f105aaaccc6-kube-api-access-vwjng\") pod \"keystone-db-sync-jkjg4\" (UID: \"1c0d1344-182d-4ada-9aa9-0f105aaaccc6\") " pod="nova-kuttl-default/keystone-db-sync-jkjg4" Jan 27 13:26:58 crc kubenswrapper[4786]: I0127 13:26:58.424066 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-sync-jkjg4" Jan 27 13:26:58 crc kubenswrapper[4786]: I0127 13:26:58.892794 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-db-sync-jkjg4"] Jan 27 13:26:59 crc kubenswrapper[4786]: I0127 13:26:59.445734 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-sync-jkjg4" event={"ID":"1c0d1344-182d-4ada-9aa9-0f105aaaccc6","Type":"ContainerStarted","Data":"cbd86167a326cd95ad51e46e13c0d481d97fa3f42e14d8f0fa216bc014f46fd2"} Jan 27 13:27:05 crc kubenswrapper[4786]: I0127 13:27:05.492922 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-sync-jkjg4" event={"ID":"1c0d1344-182d-4ada-9aa9-0f105aaaccc6","Type":"ContainerStarted","Data":"d54488c1b2890687f7d5a4af4db34e6f28dc654daf8883bce7c8edac84e2ed23"} Jan 27 13:27:05 crc kubenswrapper[4786]: I0127 13:27:05.510871 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-db-sync-jkjg4" podStartSLOduration=1.961870912 podStartE2EDuration="7.510855043s" podCreationTimestamp="2026-01-27 13:26:58 
+0000 UTC" firstStartedPulling="2026-01-27 13:26:58.895185944 +0000 UTC m=+1202.105800063" lastFinishedPulling="2026-01-27 13:27:04.444170075 +0000 UTC m=+1207.654784194" observedRunningTime="2026-01-27 13:27:05.508184109 +0000 UTC m=+1208.718798228" watchObservedRunningTime="2026-01-27 13:27:05.510855043 +0000 UTC m=+1208.721469162" Jan 27 13:27:07 crc kubenswrapper[4786]: I0127 13:27:07.510808 4786 generic.go:334] "Generic (PLEG): container finished" podID="1c0d1344-182d-4ada-9aa9-0f105aaaccc6" containerID="d54488c1b2890687f7d5a4af4db34e6f28dc654daf8883bce7c8edac84e2ed23" exitCode=0 Jan 27 13:27:07 crc kubenswrapper[4786]: I0127 13:27:07.510894 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-sync-jkjg4" event={"ID":"1c0d1344-182d-4ada-9aa9-0f105aaaccc6","Type":"ContainerDied","Data":"d54488c1b2890687f7d5a4af4db34e6f28dc654daf8883bce7c8edac84e2ed23"} Jan 27 13:27:08 crc kubenswrapper[4786]: I0127 13:27:08.789235 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-db-sync-jkjg4" Jan 27 13:27:08 crc kubenswrapper[4786]: I0127 13:27:08.923723 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwjng\" (UniqueName: \"kubernetes.io/projected/1c0d1344-182d-4ada-9aa9-0f105aaaccc6-kube-api-access-vwjng\") pod \"1c0d1344-182d-4ada-9aa9-0f105aaaccc6\" (UID: \"1c0d1344-182d-4ada-9aa9-0f105aaaccc6\") " Jan 27 13:27:08 crc kubenswrapper[4786]: I0127 13:27:08.923776 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c0d1344-182d-4ada-9aa9-0f105aaaccc6-config-data\") pod \"1c0d1344-182d-4ada-9aa9-0f105aaaccc6\" (UID: \"1c0d1344-182d-4ada-9aa9-0f105aaaccc6\") " Jan 27 13:27:08 crc kubenswrapper[4786]: I0127 13:27:08.923852 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c0d1344-182d-4ada-9aa9-0f105aaaccc6-combined-ca-bundle\") pod \"1c0d1344-182d-4ada-9aa9-0f105aaaccc6\" (UID: \"1c0d1344-182d-4ada-9aa9-0f105aaaccc6\") " Jan 27 13:27:08 crc kubenswrapper[4786]: I0127 13:27:08.929324 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c0d1344-182d-4ada-9aa9-0f105aaaccc6-kube-api-access-vwjng" (OuterVolumeSpecName: "kube-api-access-vwjng") pod "1c0d1344-182d-4ada-9aa9-0f105aaaccc6" (UID: "1c0d1344-182d-4ada-9aa9-0f105aaaccc6"). InnerVolumeSpecName "kube-api-access-vwjng". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:27:08 crc kubenswrapper[4786]: I0127 13:27:08.945010 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c0d1344-182d-4ada-9aa9-0f105aaaccc6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c0d1344-182d-4ada-9aa9-0f105aaaccc6" (UID: "1c0d1344-182d-4ada-9aa9-0f105aaaccc6"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:27:08 crc kubenswrapper[4786]: I0127 13:27:08.960030 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c0d1344-182d-4ada-9aa9-0f105aaaccc6-config-data" (OuterVolumeSpecName: "config-data") pod "1c0d1344-182d-4ada-9aa9-0f105aaaccc6" (UID: "1c0d1344-182d-4ada-9aa9-0f105aaaccc6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:27:09 crc kubenswrapper[4786]: I0127 13:27:09.026632 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwjng\" (UniqueName: \"kubernetes.io/projected/1c0d1344-182d-4ada-9aa9-0f105aaaccc6-kube-api-access-vwjng\") on node \"crc\" DevicePath \"\"" Jan 27 13:27:09 crc kubenswrapper[4786]: I0127 13:27:09.026671 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c0d1344-182d-4ada-9aa9-0f105aaaccc6-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:27:09 crc kubenswrapper[4786]: I0127 13:27:09.026684 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c0d1344-182d-4ada-9aa9-0f105aaaccc6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 13:27:09 crc kubenswrapper[4786]: I0127 13:27:09.531555 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-sync-jkjg4" event={"ID":"1c0d1344-182d-4ada-9aa9-0f105aaaccc6","Type":"ContainerDied","Data":"cbd86167a326cd95ad51e46e13c0d481d97fa3f42e14d8f0fa216bc014f46fd2"} Jan 27 13:27:09 crc kubenswrapper[4786]: I0127 13:27:09.532158 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbd86167a326cd95ad51e46e13c0d481d97fa3f42e14d8f0fa216bc014f46fd2" Jan 27 13:27:09 crc kubenswrapper[4786]: I0127 13:27:09.531635 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-db-sync-jkjg4" Jan 27 13:27:09 crc kubenswrapper[4786]: I0127 13:27:09.532587 4786 patch_prober.go:28] interesting pod/machine-config-daemon-7bxtk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:27:09 crc kubenswrapper[4786]: I0127 13:27:09.532715 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:27:09 crc kubenswrapper[4786]: I0127 13:27:09.532781 4786 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" Jan 27 13:27:09 crc kubenswrapper[4786]: I0127 13:27:09.533367 4786 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7a272494933f607d1d0ff23a4dfbd30c05a19b1fad6cb442bff9296b566a9151"} pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 13:27:09 crc kubenswrapper[4786]: I0127 13:27:09.533447 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" containerID="cri-o://7a272494933f607d1d0ff23a4dfbd30c05a19b1fad6cb442bff9296b566a9151" gracePeriod=600 Jan 27 13:27:09 crc kubenswrapper[4786]: I0127 13:27:09.719405 4786 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["nova-kuttl-default/keystone-bootstrap-4n2pp"] Jan 27 13:27:09 crc kubenswrapper[4786]: E0127 13:27:09.719837 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c0d1344-182d-4ada-9aa9-0f105aaaccc6" containerName="keystone-db-sync" Jan 27 13:27:09 crc kubenswrapper[4786]: I0127 13:27:09.719852 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c0d1344-182d-4ada-9aa9-0f105aaaccc6" containerName="keystone-db-sync" Jan 27 13:27:09 crc kubenswrapper[4786]: I0127 13:27:09.720054 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c0d1344-182d-4ada-9aa9-0f105aaaccc6" containerName="keystone-db-sync" Jan 27 13:27:09 crc kubenswrapper[4786]: I0127 13:27:09.722952 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-4n2pp" Jan 27 13:27:09 crc kubenswrapper[4786]: I0127 13:27:09.730369 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-keystone-dockercfg-mjw8x" Jan 27 13:27:09 crc kubenswrapper[4786]: I0127 13:27:09.730416 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-config-data" Jan 27 13:27:09 crc kubenswrapper[4786]: I0127 13:27:09.730560 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone" Jan 27 13:27:09 crc kubenswrapper[4786]: I0127 13:27:09.730694 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-scripts" Jan 27 13:27:09 crc kubenswrapper[4786]: I0127 13:27:09.730907 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"osp-secret" Jan 27 13:27:09 crc kubenswrapper[4786]: I0127 13:27:09.736353 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-4n2pp"] Jan 27 13:27:09 crc kubenswrapper[4786]: I0127 13:27:09.871092 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvdnv\" (UniqueName: \"kubernetes.io/projected/3e454e9d-7129-498a-8d93-ae39b37a8f6b-kube-api-access-fvdnv\") pod \"keystone-bootstrap-4n2pp\" (UID: \"3e454e9d-7129-498a-8d93-ae39b37a8f6b\") " pod="nova-kuttl-default/keystone-bootstrap-4n2pp" Jan 27 13:27:09 crc kubenswrapper[4786]: I0127 13:27:09.871167 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3e454e9d-7129-498a-8d93-ae39b37a8f6b-credential-keys\") pod \"keystone-bootstrap-4n2pp\" (UID: \"3e454e9d-7129-498a-8d93-ae39b37a8f6b\") " pod="nova-kuttl-default/keystone-bootstrap-4n2pp" Jan 27 13:27:09 crc kubenswrapper[4786]: I0127 13:27:09.871253 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e454e9d-7129-498a-8d93-ae39b37a8f6b-combined-ca-bundle\") pod \"keystone-bootstrap-4n2pp\" (UID: \"3e454e9d-7129-498a-8d93-ae39b37a8f6b\") " pod="nova-kuttl-default/keystone-bootstrap-4n2pp" Jan 27 13:27:09 crc kubenswrapper[4786]: I0127 13:27:09.871285 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e454e9d-7129-498a-8d93-ae39b37a8f6b-config-data\") pod \"keystone-bootstrap-4n2pp\" (UID: \"3e454e9d-7129-498a-8d93-ae39b37a8f6b\") " pod="nova-kuttl-default/keystone-bootstrap-4n2pp" Jan 27 13:27:09 crc kubenswrapper[4786]: I0127 13:27:09.871328 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3e454e9d-7129-498a-8d93-ae39b37a8f6b-fernet-keys\") pod \"keystone-bootstrap-4n2pp\" (UID: \"3e454e9d-7129-498a-8d93-ae39b37a8f6b\") " pod="nova-kuttl-default/keystone-bootstrap-4n2pp" Jan 27 13:27:09 crc kubenswrapper[4786]: I0127 
13:27:09.871384 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e454e9d-7129-498a-8d93-ae39b37a8f6b-scripts\") pod \"keystone-bootstrap-4n2pp\" (UID: \"3e454e9d-7129-498a-8d93-ae39b37a8f6b\") " pod="nova-kuttl-default/keystone-bootstrap-4n2pp" Jan 27 13:27:09 crc kubenswrapper[4786]: I0127 13:27:09.972214 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvdnv\" (UniqueName: \"kubernetes.io/projected/3e454e9d-7129-498a-8d93-ae39b37a8f6b-kube-api-access-fvdnv\") pod \"keystone-bootstrap-4n2pp\" (UID: \"3e454e9d-7129-498a-8d93-ae39b37a8f6b\") " pod="nova-kuttl-default/keystone-bootstrap-4n2pp" Jan 27 13:27:09 crc kubenswrapper[4786]: I0127 13:27:09.972521 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3e454e9d-7129-498a-8d93-ae39b37a8f6b-credential-keys\") pod \"keystone-bootstrap-4n2pp\" (UID: \"3e454e9d-7129-498a-8d93-ae39b37a8f6b\") " pod="nova-kuttl-default/keystone-bootstrap-4n2pp" Jan 27 13:27:09 crc kubenswrapper[4786]: I0127 13:27:09.972555 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e454e9d-7129-498a-8d93-ae39b37a8f6b-combined-ca-bundle\") pod \"keystone-bootstrap-4n2pp\" (UID: \"3e454e9d-7129-498a-8d93-ae39b37a8f6b\") " pod="nova-kuttl-default/keystone-bootstrap-4n2pp" Jan 27 13:27:09 crc kubenswrapper[4786]: I0127 13:27:09.972572 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e454e9d-7129-498a-8d93-ae39b37a8f6b-config-data\") pod \"keystone-bootstrap-4n2pp\" (UID: \"3e454e9d-7129-498a-8d93-ae39b37a8f6b\") " pod="nova-kuttl-default/keystone-bootstrap-4n2pp" Jan 27 13:27:09 crc kubenswrapper[4786]: I0127 13:27:09.972617 4786 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3e454e9d-7129-498a-8d93-ae39b37a8f6b-fernet-keys\") pod \"keystone-bootstrap-4n2pp\" (UID: \"3e454e9d-7129-498a-8d93-ae39b37a8f6b\") " pod="nova-kuttl-default/keystone-bootstrap-4n2pp" Jan 27 13:27:09 crc kubenswrapper[4786]: I0127 13:27:09.972659 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e454e9d-7129-498a-8d93-ae39b37a8f6b-scripts\") pod \"keystone-bootstrap-4n2pp\" (UID: \"3e454e9d-7129-498a-8d93-ae39b37a8f6b\") " pod="nova-kuttl-default/keystone-bootstrap-4n2pp" Jan 27 13:27:09 crc kubenswrapper[4786]: I0127 13:27:09.979071 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e454e9d-7129-498a-8d93-ae39b37a8f6b-config-data\") pod \"keystone-bootstrap-4n2pp\" (UID: \"3e454e9d-7129-498a-8d93-ae39b37a8f6b\") " pod="nova-kuttl-default/keystone-bootstrap-4n2pp" Jan 27 13:27:09 crc kubenswrapper[4786]: I0127 13:27:09.979171 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e454e9d-7129-498a-8d93-ae39b37a8f6b-scripts\") pod \"keystone-bootstrap-4n2pp\" (UID: \"3e454e9d-7129-498a-8d93-ae39b37a8f6b\") " pod="nova-kuttl-default/keystone-bootstrap-4n2pp" Jan 27 13:27:09 crc kubenswrapper[4786]: I0127 13:27:09.979798 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3e454e9d-7129-498a-8d93-ae39b37a8f6b-credential-keys\") pod \"keystone-bootstrap-4n2pp\" (UID: \"3e454e9d-7129-498a-8d93-ae39b37a8f6b\") " pod="nova-kuttl-default/keystone-bootstrap-4n2pp" Jan 27 13:27:09 crc kubenswrapper[4786]: I0127 13:27:09.980921 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3e454e9d-7129-498a-8d93-ae39b37a8f6b-combined-ca-bundle\") pod \"keystone-bootstrap-4n2pp\" (UID: \"3e454e9d-7129-498a-8d93-ae39b37a8f6b\") " pod="nova-kuttl-default/keystone-bootstrap-4n2pp" Jan 27 13:27:09 crc kubenswrapper[4786]: I0127 13:27:09.992135 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3e454e9d-7129-498a-8d93-ae39b37a8f6b-fernet-keys\") pod \"keystone-bootstrap-4n2pp\" (UID: \"3e454e9d-7129-498a-8d93-ae39b37a8f6b\") " pod="nova-kuttl-default/keystone-bootstrap-4n2pp" Jan 27 13:27:10 crc kubenswrapper[4786]: I0127 13:27:10.001740 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/placement-db-sync-fqbs4"] Jan 27 13:27:10 crc kubenswrapper[4786]: I0127 13:27:10.003135 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-sync-fqbs4" Jan 27 13:27:10 crc kubenswrapper[4786]: I0127 13:27:10.006746 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-config-data" Jan 27 13:27:10 crc kubenswrapper[4786]: I0127 13:27:10.007775 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-placement-dockercfg-dn4dp" Jan 27 13:27:10 crc kubenswrapper[4786]: I0127 13:27:10.007946 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-scripts" Jan 27 13:27:10 crc kubenswrapper[4786]: I0127 13:27:10.021072 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-db-sync-fqbs4"] Jan 27 13:27:10 crc kubenswrapper[4786]: I0127 13:27:10.034296 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvdnv\" (UniqueName: \"kubernetes.io/projected/3e454e9d-7129-498a-8d93-ae39b37a8f6b-kube-api-access-fvdnv\") pod \"keystone-bootstrap-4n2pp\" (UID: \"3e454e9d-7129-498a-8d93-ae39b37a8f6b\") " 
pod="nova-kuttl-default/keystone-bootstrap-4n2pp" Jan 27 13:27:10 crc kubenswrapper[4786]: I0127 13:27:10.059281 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-4n2pp" Jan 27 13:27:10 crc kubenswrapper[4786]: I0127 13:27:10.074119 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78638470-5811-4d8d-9600-aa15f9e5baed-logs\") pod \"placement-db-sync-fqbs4\" (UID: \"78638470-5811-4d8d-9600-aa15f9e5baed\") " pod="nova-kuttl-default/placement-db-sync-fqbs4" Jan 27 13:27:10 crc kubenswrapper[4786]: I0127 13:27:10.074176 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78638470-5811-4d8d-9600-aa15f9e5baed-config-data\") pod \"placement-db-sync-fqbs4\" (UID: \"78638470-5811-4d8d-9600-aa15f9e5baed\") " pod="nova-kuttl-default/placement-db-sync-fqbs4" Jan 27 13:27:10 crc kubenswrapper[4786]: I0127 13:27:10.074230 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx74n\" (UniqueName: \"kubernetes.io/projected/78638470-5811-4d8d-9600-aa15f9e5baed-kube-api-access-hx74n\") pod \"placement-db-sync-fqbs4\" (UID: \"78638470-5811-4d8d-9600-aa15f9e5baed\") " pod="nova-kuttl-default/placement-db-sync-fqbs4" Jan 27 13:27:10 crc kubenswrapper[4786]: I0127 13:27:10.074268 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78638470-5811-4d8d-9600-aa15f9e5baed-combined-ca-bundle\") pod \"placement-db-sync-fqbs4\" (UID: \"78638470-5811-4d8d-9600-aa15f9e5baed\") " pod="nova-kuttl-default/placement-db-sync-fqbs4" Jan 27 13:27:10 crc kubenswrapper[4786]: I0127 13:27:10.074337 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78638470-5811-4d8d-9600-aa15f9e5baed-scripts\") pod \"placement-db-sync-fqbs4\" (UID: \"78638470-5811-4d8d-9600-aa15f9e5baed\") " pod="nova-kuttl-default/placement-db-sync-fqbs4" Jan 27 13:27:10 crc kubenswrapper[4786]: I0127 13:27:10.175388 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78638470-5811-4d8d-9600-aa15f9e5baed-scripts\") pod \"placement-db-sync-fqbs4\" (UID: \"78638470-5811-4d8d-9600-aa15f9e5baed\") " pod="nova-kuttl-default/placement-db-sync-fqbs4" Jan 27 13:27:10 crc kubenswrapper[4786]: I0127 13:27:10.175508 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78638470-5811-4d8d-9600-aa15f9e5baed-logs\") pod \"placement-db-sync-fqbs4\" (UID: \"78638470-5811-4d8d-9600-aa15f9e5baed\") " pod="nova-kuttl-default/placement-db-sync-fqbs4" Jan 27 13:27:10 crc kubenswrapper[4786]: I0127 13:27:10.175551 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78638470-5811-4d8d-9600-aa15f9e5baed-config-data\") pod \"placement-db-sync-fqbs4\" (UID: \"78638470-5811-4d8d-9600-aa15f9e5baed\") " pod="nova-kuttl-default/placement-db-sync-fqbs4" Jan 27 13:27:10 crc kubenswrapper[4786]: I0127 13:27:10.175706 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx74n\" (UniqueName: \"kubernetes.io/projected/78638470-5811-4d8d-9600-aa15f9e5baed-kube-api-access-hx74n\") pod \"placement-db-sync-fqbs4\" (UID: \"78638470-5811-4d8d-9600-aa15f9e5baed\") " pod="nova-kuttl-default/placement-db-sync-fqbs4" Jan 27 13:27:10 crc kubenswrapper[4786]: I0127 13:27:10.175762 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/78638470-5811-4d8d-9600-aa15f9e5baed-combined-ca-bundle\") pod \"placement-db-sync-fqbs4\" (UID: \"78638470-5811-4d8d-9600-aa15f9e5baed\") " pod="nova-kuttl-default/placement-db-sync-fqbs4" Jan 27 13:27:10 crc kubenswrapper[4786]: I0127 13:27:10.176040 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78638470-5811-4d8d-9600-aa15f9e5baed-logs\") pod \"placement-db-sync-fqbs4\" (UID: \"78638470-5811-4d8d-9600-aa15f9e5baed\") " pod="nova-kuttl-default/placement-db-sync-fqbs4" Jan 27 13:27:10 crc kubenswrapper[4786]: I0127 13:27:10.179124 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78638470-5811-4d8d-9600-aa15f9e5baed-combined-ca-bundle\") pod \"placement-db-sync-fqbs4\" (UID: \"78638470-5811-4d8d-9600-aa15f9e5baed\") " pod="nova-kuttl-default/placement-db-sync-fqbs4" Jan 27 13:27:10 crc kubenswrapper[4786]: I0127 13:27:10.180147 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78638470-5811-4d8d-9600-aa15f9e5baed-config-data\") pod \"placement-db-sync-fqbs4\" (UID: \"78638470-5811-4d8d-9600-aa15f9e5baed\") " pod="nova-kuttl-default/placement-db-sync-fqbs4" Jan 27 13:27:10 crc kubenswrapper[4786]: I0127 13:27:10.180633 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78638470-5811-4d8d-9600-aa15f9e5baed-scripts\") pod \"placement-db-sync-fqbs4\" (UID: \"78638470-5811-4d8d-9600-aa15f9e5baed\") " pod="nova-kuttl-default/placement-db-sync-fqbs4" Jan 27 13:27:10 crc kubenswrapper[4786]: I0127 13:27:10.197173 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx74n\" (UniqueName: \"kubernetes.io/projected/78638470-5811-4d8d-9600-aa15f9e5baed-kube-api-access-hx74n\") pod \"placement-db-sync-fqbs4\" (UID: 
\"78638470-5811-4d8d-9600-aa15f9e5baed\") " pod="nova-kuttl-default/placement-db-sync-fqbs4" Jan 27 13:27:10 crc kubenswrapper[4786]: I0127 13:27:10.431258 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-sync-fqbs4" Jan 27 13:27:10 crc kubenswrapper[4786]: I0127 13:27:10.545627 4786 generic.go:334] "Generic (PLEG): container finished" podID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerID="7a272494933f607d1d0ff23a4dfbd30c05a19b1fad6cb442bff9296b566a9151" exitCode=0 Jan 27 13:27:10 crc kubenswrapper[4786]: I0127 13:27:10.545675 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" event={"ID":"2c6a2646-52f7-41be-8a81-3fed6eac75cc","Type":"ContainerDied","Data":"7a272494933f607d1d0ff23a4dfbd30c05a19b1fad6cb442bff9296b566a9151"} Jan 27 13:27:10 crc kubenswrapper[4786]: I0127 13:27:10.545700 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" event={"ID":"2c6a2646-52f7-41be-8a81-3fed6eac75cc","Type":"ContainerStarted","Data":"f3ed072ebc16e765b7905a1ff330ce7f66cb1fdc6d65470e94dc155e6c8bb633"} Jan 27 13:27:10 crc kubenswrapper[4786]: I0127 13:27:10.545715 4786 scope.go:117] "RemoveContainer" containerID="4d62cb08eff3ad117221cbb57b9b2f848974ad39a63a811cd7c2c3452ac8780c" Jan 27 13:27:10 crc kubenswrapper[4786]: I0127 13:27:10.608847 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-4n2pp"] Jan 27 13:27:10 crc kubenswrapper[4786]: W0127 13:27:10.616912 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e454e9d_7129_498a_8d93_ae39b37a8f6b.slice/crio-439a33a98952b667b901cf80de142638bf82f3d565c19df48462824bb54c6ec5 WatchSource:0}: Error finding container 439a33a98952b667b901cf80de142638bf82f3d565c19df48462824bb54c6ec5: Status 404 returned 
error can't find the container with id 439a33a98952b667b901cf80de142638bf82f3d565c19df48462824bb54c6ec5 Jan 27 13:27:10 crc kubenswrapper[4786]: I0127 13:27:10.677747 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-db-sync-fqbs4"] Jan 27 13:27:11 crc kubenswrapper[4786]: I0127 13:27:11.554435 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-sync-fqbs4" event={"ID":"78638470-5811-4d8d-9600-aa15f9e5baed","Type":"ContainerStarted","Data":"cf6932ad2011122b8afa4c82b4e50aac30c3514476665e009ea970e324142d09"} Jan 27 13:27:11 crc kubenswrapper[4786]: I0127 13:27:11.556729 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-4n2pp" event={"ID":"3e454e9d-7129-498a-8d93-ae39b37a8f6b","Type":"ContainerStarted","Data":"ce4f479279a8aa6ac8901f0dde6e54524a6cb6d896f8ad35e5a0df41d4a69e2b"} Jan 27 13:27:11 crc kubenswrapper[4786]: I0127 13:27:11.556805 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-4n2pp" event={"ID":"3e454e9d-7129-498a-8d93-ae39b37a8f6b","Type":"ContainerStarted","Data":"439a33a98952b667b901cf80de142638bf82f3d565c19df48462824bb54c6ec5"} Jan 27 13:27:11 crc kubenswrapper[4786]: I0127 13:27:11.574955 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-bootstrap-4n2pp" podStartSLOduration=2.574938124 podStartE2EDuration="2.574938124s" podCreationTimestamp="2026-01-27 13:27:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:27:11.571378856 +0000 UTC m=+1214.781992975" watchObservedRunningTime="2026-01-27 13:27:11.574938124 +0000 UTC m=+1214.785552243" Jan 27 13:27:14 crc kubenswrapper[4786]: I0127 13:27:14.586198 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-sync-fqbs4" 
event={"ID":"78638470-5811-4d8d-9600-aa15f9e5baed","Type":"ContainerStarted","Data":"7119c06e61ccf463fe2783cf6be16c7f81ef95b684bbc210374ec440d95ead92"} Jan 27 13:27:14 crc kubenswrapper[4786]: I0127 13:27:14.587881 4786 generic.go:334] "Generic (PLEG): container finished" podID="3e454e9d-7129-498a-8d93-ae39b37a8f6b" containerID="ce4f479279a8aa6ac8901f0dde6e54524a6cb6d896f8ad35e5a0df41d4a69e2b" exitCode=0 Jan 27 13:27:14 crc kubenswrapper[4786]: I0127 13:27:14.587930 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-4n2pp" event={"ID":"3e454e9d-7129-498a-8d93-ae39b37a8f6b","Type":"ContainerDied","Data":"ce4f479279a8aa6ac8901f0dde6e54524a6cb6d896f8ad35e5a0df41d4a69e2b"} Jan 27 13:27:14 crc kubenswrapper[4786]: I0127 13:27:14.608309 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/placement-db-sync-fqbs4" podStartSLOduration=2.103665391 podStartE2EDuration="5.60829193s" podCreationTimestamp="2026-01-27 13:27:09 +0000 UTC" firstStartedPulling="2026-01-27 13:27:10.684563738 +0000 UTC m=+1213.895177857" lastFinishedPulling="2026-01-27 13:27:14.189190277 +0000 UTC m=+1217.399804396" observedRunningTime="2026-01-27 13:27:14.601787591 +0000 UTC m=+1217.812401720" watchObservedRunningTime="2026-01-27 13:27:14.60829193 +0000 UTC m=+1217.818906049" Jan 27 13:27:15 crc kubenswrapper[4786]: I0127 13:27:15.889121 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-4n2pp" Jan 27 13:27:15 crc kubenswrapper[4786]: I0127 13:27:15.960290 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3e454e9d-7129-498a-8d93-ae39b37a8f6b-credential-keys\") pod \"3e454e9d-7129-498a-8d93-ae39b37a8f6b\" (UID: \"3e454e9d-7129-498a-8d93-ae39b37a8f6b\") " Jan 27 13:27:15 crc kubenswrapper[4786]: I0127 13:27:15.960360 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e454e9d-7129-498a-8d93-ae39b37a8f6b-config-data\") pod \"3e454e9d-7129-498a-8d93-ae39b37a8f6b\" (UID: \"3e454e9d-7129-498a-8d93-ae39b37a8f6b\") " Jan 27 13:27:15 crc kubenswrapper[4786]: I0127 13:27:15.960419 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3e454e9d-7129-498a-8d93-ae39b37a8f6b-fernet-keys\") pod \"3e454e9d-7129-498a-8d93-ae39b37a8f6b\" (UID: \"3e454e9d-7129-498a-8d93-ae39b37a8f6b\") " Jan 27 13:27:15 crc kubenswrapper[4786]: I0127 13:27:15.960478 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e454e9d-7129-498a-8d93-ae39b37a8f6b-combined-ca-bundle\") pod \"3e454e9d-7129-498a-8d93-ae39b37a8f6b\" (UID: \"3e454e9d-7129-498a-8d93-ae39b37a8f6b\") " Jan 27 13:27:15 crc kubenswrapper[4786]: I0127 13:27:15.960506 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e454e9d-7129-498a-8d93-ae39b37a8f6b-scripts\") pod \"3e454e9d-7129-498a-8d93-ae39b37a8f6b\" (UID: \"3e454e9d-7129-498a-8d93-ae39b37a8f6b\") " Jan 27 13:27:15 crc kubenswrapper[4786]: I0127 13:27:15.960531 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvdnv\" (UniqueName: 
\"kubernetes.io/projected/3e454e9d-7129-498a-8d93-ae39b37a8f6b-kube-api-access-fvdnv\") pod \"3e454e9d-7129-498a-8d93-ae39b37a8f6b\" (UID: \"3e454e9d-7129-498a-8d93-ae39b37a8f6b\") " Jan 27 13:27:15 crc kubenswrapper[4786]: I0127 13:27:15.980164 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e454e9d-7129-498a-8d93-ae39b37a8f6b-kube-api-access-fvdnv" (OuterVolumeSpecName: "kube-api-access-fvdnv") pod "3e454e9d-7129-498a-8d93-ae39b37a8f6b" (UID: "3e454e9d-7129-498a-8d93-ae39b37a8f6b"). InnerVolumeSpecName "kube-api-access-fvdnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:27:15 crc kubenswrapper[4786]: I0127 13:27:15.980639 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e454e9d-7129-498a-8d93-ae39b37a8f6b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e454e9d-7129-498a-8d93-ae39b37a8f6b" (UID: "3e454e9d-7129-498a-8d93-ae39b37a8f6b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:27:15 crc kubenswrapper[4786]: I0127 13:27:15.980914 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e454e9d-7129-498a-8d93-ae39b37a8f6b-config-data" (OuterVolumeSpecName: "config-data") pod "3e454e9d-7129-498a-8d93-ae39b37a8f6b" (UID: "3e454e9d-7129-498a-8d93-ae39b37a8f6b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:27:15 crc kubenswrapper[4786]: I0127 13:27:15.992376 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e454e9d-7129-498a-8d93-ae39b37a8f6b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3e454e9d-7129-498a-8d93-ae39b37a8f6b" (UID: "3e454e9d-7129-498a-8d93-ae39b37a8f6b"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:27:15 crc kubenswrapper[4786]: I0127 13:27:15.993223 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e454e9d-7129-498a-8d93-ae39b37a8f6b-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "3e454e9d-7129-498a-8d93-ae39b37a8f6b" (UID: "3e454e9d-7129-498a-8d93-ae39b37a8f6b"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:27:15 crc kubenswrapper[4786]: I0127 13:27:15.996772 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e454e9d-7129-498a-8d93-ae39b37a8f6b-scripts" (OuterVolumeSpecName: "scripts") pod "3e454e9d-7129-498a-8d93-ae39b37a8f6b" (UID: "3e454e9d-7129-498a-8d93-ae39b37a8f6b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:27:16 crc kubenswrapper[4786]: I0127 13:27:16.061939 4786 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3e454e9d-7129-498a-8d93-ae39b37a8f6b-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 13:27:16 crc kubenswrapper[4786]: I0127 13:27:16.062165 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e454e9d-7129-498a-8d93-ae39b37a8f6b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 13:27:16 crc kubenswrapper[4786]: I0127 13:27:16.062228 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e454e9d-7129-498a-8d93-ae39b37a8f6b-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:27:16 crc kubenswrapper[4786]: I0127 13:27:16.062280 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvdnv\" (UniqueName: \"kubernetes.io/projected/3e454e9d-7129-498a-8d93-ae39b37a8f6b-kube-api-access-fvdnv\") on node \"crc\" DevicePath \"\"" Jan 27 13:27:16 crc 
kubenswrapper[4786]: I0127 13:27:16.062348 4786 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3e454e9d-7129-498a-8d93-ae39b37a8f6b-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 27 13:27:16 crc kubenswrapper[4786]: I0127 13:27:16.062477 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e454e9d-7129-498a-8d93-ae39b37a8f6b-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:27:16 crc kubenswrapper[4786]: I0127 13:27:16.602532 4786 generic.go:334] "Generic (PLEG): container finished" podID="78638470-5811-4d8d-9600-aa15f9e5baed" containerID="7119c06e61ccf463fe2783cf6be16c7f81ef95b684bbc210374ec440d95ead92" exitCode=0 Jan 27 13:27:16 crc kubenswrapper[4786]: I0127 13:27:16.602624 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-sync-fqbs4" event={"ID":"78638470-5811-4d8d-9600-aa15f9e5baed","Type":"ContainerDied","Data":"7119c06e61ccf463fe2783cf6be16c7f81ef95b684bbc210374ec440d95ead92"} Jan 27 13:27:16 crc kubenswrapper[4786]: I0127 13:27:16.604035 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-4n2pp" event={"ID":"3e454e9d-7129-498a-8d93-ae39b37a8f6b","Type":"ContainerDied","Data":"439a33a98952b667b901cf80de142638bf82f3d565c19df48462824bb54c6ec5"} Jan 27 13:27:16 crc kubenswrapper[4786]: I0127 13:27:16.604074 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="439a33a98952b667b901cf80de142638bf82f3d565c19df48462824bb54c6ec5" Jan 27 13:27:16 crc kubenswrapper[4786]: I0127 13:27:16.604080 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-4n2pp" Jan 27 13:27:16 crc kubenswrapper[4786]: I0127 13:27:16.703461 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-4n2pp"] Jan 27 13:27:16 crc kubenswrapper[4786]: I0127 13:27:16.708982 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-4n2pp"] Jan 27 13:27:16 crc kubenswrapper[4786]: I0127 13:27:16.785505 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-bootstrap-ghf44"] Jan 27 13:27:16 crc kubenswrapper[4786]: E0127 13:27:16.785967 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e454e9d-7129-498a-8d93-ae39b37a8f6b" containerName="keystone-bootstrap" Jan 27 13:27:16 crc kubenswrapper[4786]: I0127 13:27:16.785993 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e454e9d-7129-498a-8d93-ae39b37a8f6b" containerName="keystone-bootstrap" Jan 27 13:27:16 crc kubenswrapper[4786]: I0127 13:27:16.786172 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e454e9d-7129-498a-8d93-ae39b37a8f6b" containerName="keystone-bootstrap" Jan 27 13:27:16 crc kubenswrapper[4786]: I0127 13:27:16.786835 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-ghf44" Jan 27 13:27:16 crc kubenswrapper[4786]: I0127 13:27:16.789716 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-config-data" Jan 27 13:27:16 crc kubenswrapper[4786]: I0127 13:27:16.789774 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-keystone-dockercfg-mjw8x" Jan 27 13:27:16 crc kubenswrapper[4786]: I0127 13:27:16.790256 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"osp-secret" Jan 27 13:27:16 crc kubenswrapper[4786]: I0127 13:27:16.790509 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-scripts" Jan 27 13:27:16 crc kubenswrapper[4786]: I0127 13:27:16.790511 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone" Jan 27 13:27:16 crc kubenswrapper[4786]: I0127 13:27:16.805741 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-ghf44"] Jan 27 13:27:16 crc kubenswrapper[4786]: I0127 13:27:16.974751 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1e59cf-7674-4a16-be0e-bd08a540a304-combined-ca-bundle\") pod \"keystone-bootstrap-ghf44\" (UID: \"be1e59cf-7674-4a16-be0e-bd08a540a304\") " pod="nova-kuttl-default/keystone-bootstrap-ghf44" Jan 27 13:27:16 crc kubenswrapper[4786]: I0127 13:27:16.974813 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be1e59cf-7674-4a16-be0e-bd08a540a304-scripts\") pod \"keystone-bootstrap-ghf44\" (UID: \"be1e59cf-7674-4a16-be0e-bd08a540a304\") " pod="nova-kuttl-default/keystone-bootstrap-ghf44" Jan 27 13:27:16 crc kubenswrapper[4786]: I0127 13:27:16.974843 4786 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/be1e59cf-7674-4a16-be0e-bd08a540a304-fernet-keys\") pod \"keystone-bootstrap-ghf44\" (UID: \"be1e59cf-7674-4a16-be0e-bd08a540a304\") " pod="nova-kuttl-default/keystone-bootstrap-ghf44" Jan 27 13:27:16 crc kubenswrapper[4786]: I0127 13:27:16.974868 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppnf8\" (UniqueName: \"kubernetes.io/projected/be1e59cf-7674-4a16-be0e-bd08a540a304-kube-api-access-ppnf8\") pod \"keystone-bootstrap-ghf44\" (UID: \"be1e59cf-7674-4a16-be0e-bd08a540a304\") " pod="nova-kuttl-default/keystone-bootstrap-ghf44" Jan 27 13:27:16 crc kubenswrapper[4786]: I0127 13:27:16.974893 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be1e59cf-7674-4a16-be0e-bd08a540a304-config-data\") pod \"keystone-bootstrap-ghf44\" (UID: \"be1e59cf-7674-4a16-be0e-bd08a540a304\") " pod="nova-kuttl-default/keystone-bootstrap-ghf44" Jan 27 13:27:16 crc kubenswrapper[4786]: I0127 13:27:16.974944 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/be1e59cf-7674-4a16-be0e-bd08a540a304-credential-keys\") pod \"keystone-bootstrap-ghf44\" (UID: \"be1e59cf-7674-4a16-be0e-bd08a540a304\") " pod="nova-kuttl-default/keystone-bootstrap-ghf44" Jan 27 13:27:17 crc kubenswrapper[4786]: I0127 13:27:17.076229 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/be1e59cf-7674-4a16-be0e-bd08a540a304-credential-keys\") pod \"keystone-bootstrap-ghf44\" (UID: \"be1e59cf-7674-4a16-be0e-bd08a540a304\") " pod="nova-kuttl-default/keystone-bootstrap-ghf44" Jan 27 13:27:17 crc kubenswrapper[4786]: I0127 
13:27:17.076553 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1e59cf-7674-4a16-be0e-bd08a540a304-combined-ca-bundle\") pod \"keystone-bootstrap-ghf44\" (UID: \"be1e59cf-7674-4a16-be0e-bd08a540a304\") " pod="nova-kuttl-default/keystone-bootstrap-ghf44" Jan 27 13:27:17 crc kubenswrapper[4786]: I0127 13:27:17.076706 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be1e59cf-7674-4a16-be0e-bd08a540a304-scripts\") pod \"keystone-bootstrap-ghf44\" (UID: \"be1e59cf-7674-4a16-be0e-bd08a540a304\") " pod="nova-kuttl-default/keystone-bootstrap-ghf44" Jan 27 13:27:17 crc kubenswrapper[4786]: I0127 13:27:17.076812 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/be1e59cf-7674-4a16-be0e-bd08a540a304-fernet-keys\") pod \"keystone-bootstrap-ghf44\" (UID: \"be1e59cf-7674-4a16-be0e-bd08a540a304\") " pod="nova-kuttl-default/keystone-bootstrap-ghf44" Jan 27 13:27:17 crc kubenswrapper[4786]: I0127 13:27:17.076903 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppnf8\" (UniqueName: \"kubernetes.io/projected/be1e59cf-7674-4a16-be0e-bd08a540a304-kube-api-access-ppnf8\") pod \"keystone-bootstrap-ghf44\" (UID: \"be1e59cf-7674-4a16-be0e-bd08a540a304\") " pod="nova-kuttl-default/keystone-bootstrap-ghf44" Jan 27 13:27:17 crc kubenswrapper[4786]: I0127 13:27:17.077015 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be1e59cf-7674-4a16-be0e-bd08a540a304-config-data\") pod \"keystone-bootstrap-ghf44\" (UID: \"be1e59cf-7674-4a16-be0e-bd08a540a304\") " pod="nova-kuttl-default/keystone-bootstrap-ghf44" Jan 27 13:27:17 crc kubenswrapper[4786]: I0127 13:27:17.080529 4786 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/be1e59cf-7674-4a16-be0e-bd08a540a304-fernet-keys\") pod \"keystone-bootstrap-ghf44\" (UID: \"be1e59cf-7674-4a16-be0e-bd08a540a304\") " pod="nova-kuttl-default/keystone-bootstrap-ghf44" Jan 27 13:27:17 crc kubenswrapper[4786]: I0127 13:27:17.080642 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/be1e59cf-7674-4a16-be0e-bd08a540a304-credential-keys\") pod \"keystone-bootstrap-ghf44\" (UID: \"be1e59cf-7674-4a16-be0e-bd08a540a304\") " pod="nova-kuttl-default/keystone-bootstrap-ghf44" Jan 27 13:27:17 crc kubenswrapper[4786]: I0127 13:27:17.080819 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be1e59cf-7674-4a16-be0e-bd08a540a304-config-data\") pod \"keystone-bootstrap-ghf44\" (UID: \"be1e59cf-7674-4a16-be0e-bd08a540a304\") " pod="nova-kuttl-default/keystone-bootstrap-ghf44" Jan 27 13:27:17 crc kubenswrapper[4786]: I0127 13:27:17.080870 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1e59cf-7674-4a16-be0e-bd08a540a304-combined-ca-bundle\") pod \"keystone-bootstrap-ghf44\" (UID: \"be1e59cf-7674-4a16-be0e-bd08a540a304\") " pod="nova-kuttl-default/keystone-bootstrap-ghf44" Jan 27 13:27:17 crc kubenswrapper[4786]: I0127 13:27:17.082314 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be1e59cf-7674-4a16-be0e-bd08a540a304-scripts\") pod \"keystone-bootstrap-ghf44\" (UID: \"be1e59cf-7674-4a16-be0e-bd08a540a304\") " pod="nova-kuttl-default/keystone-bootstrap-ghf44" Jan 27 13:27:17 crc kubenswrapper[4786]: I0127 13:27:17.099091 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppnf8\" (UniqueName: 
\"kubernetes.io/projected/be1e59cf-7674-4a16-be0e-bd08a540a304-kube-api-access-ppnf8\") pod \"keystone-bootstrap-ghf44\" (UID: \"be1e59cf-7674-4a16-be0e-bd08a540a304\") " pod="nova-kuttl-default/keystone-bootstrap-ghf44" Jan 27 13:27:17 crc kubenswrapper[4786]: I0127 13:27:17.106216 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-ghf44" Jan 27 13:27:17 crc kubenswrapper[4786]: I0127 13:27:17.474234 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e454e9d-7129-498a-8d93-ae39b37a8f6b" path="/var/lib/kubelet/pods/3e454e9d-7129-498a-8d93-ae39b37a8f6b/volumes" Jan 27 13:27:17 crc kubenswrapper[4786]: I0127 13:27:17.561098 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-ghf44"] Jan 27 13:27:17 crc kubenswrapper[4786]: W0127 13:27:17.567320 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe1e59cf_7674_4a16_be0e_bd08a540a304.slice/crio-761b94098b8a25773110981e79dd7be6dd1f331036c123acebe7bba77f208863 WatchSource:0}: Error finding container 761b94098b8a25773110981e79dd7be6dd1f331036c123acebe7bba77f208863: Status 404 returned error can't find the container with id 761b94098b8a25773110981e79dd7be6dd1f331036c123acebe7bba77f208863 Jan 27 13:27:17 crc kubenswrapper[4786]: I0127 13:27:17.611677 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-ghf44" event={"ID":"be1e59cf-7674-4a16-be0e-bd08a540a304","Type":"ContainerStarted","Data":"761b94098b8a25773110981e79dd7be6dd1f331036c123acebe7bba77f208863"} Jan 27 13:27:17 crc kubenswrapper[4786]: I0127 13:27:17.821743 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-db-sync-fqbs4" Jan 27 13:27:17 crc kubenswrapper[4786]: I0127 13:27:17.888052 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78638470-5811-4d8d-9600-aa15f9e5baed-scripts\") pod \"78638470-5811-4d8d-9600-aa15f9e5baed\" (UID: \"78638470-5811-4d8d-9600-aa15f9e5baed\") " Jan 27 13:27:17 crc kubenswrapper[4786]: I0127 13:27:17.888096 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78638470-5811-4d8d-9600-aa15f9e5baed-logs\") pod \"78638470-5811-4d8d-9600-aa15f9e5baed\" (UID: \"78638470-5811-4d8d-9600-aa15f9e5baed\") " Jan 27 13:27:17 crc kubenswrapper[4786]: I0127 13:27:17.888113 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78638470-5811-4d8d-9600-aa15f9e5baed-config-data\") pod \"78638470-5811-4d8d-9600-aa15f9e5baed\" (UID: \"78638470-5811-4d8d-9600-aa15f9e5baed\") " Jan 27 13:27:17 crc kubenswrapper[4786]: I0127 13:27:17.888168 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78638470-5811-4d8d-9600-aa15f9e5baed-combined-ca-bundle\") pod \"78638470-5811-4d8d-9600-aa15f9e5baed\" (UID: \"78638470-5811-4d8d-9600-aa15f9e5baed\") " Jan 27 13:27:17 crc kubenswrapper[4786]: I0127 13:27:17.888193 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hx74n\" (UniqueName: \"kubernetes.io/projected/78638470-5811-4d8d-9600-aa15f9e5baed-kube-api-access-hx74n\") pod \"78638470-5811-4d8d-9600-aa15f9e5baed\" (UID: \"78638470-5811-4d8d-9600-aa15f9e5baed\") " Jan 27 13:27:17 crc kubenswrapper[4786]: I0127 13:27:17.888550 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/78638470-5811-4d8d-9600-aa15f9e5baed-logs" (OuterVolumeSpecName: "logs") pod "78638470-5811-4d8d-9600-aa15f9e5baed" (UID: "78638470-5811-4d8d-9600-aa15f9e5baed"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:27:17 crc kubenswrapper[4786]: I0127 13:27:17.892522 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78638470-5811-4d8d-9600-aa15f9e5baed-kube-api-access-hx74n" (OuterVolumeSpecName: "kube-api-access-hx74n") pod "78638470-5811-4d8d-9600-aa15f9e5baed" (UID: "78638470-5811-4d8d-9600-aa15f9e5baed"). InnerVolumeSpecName "kube-api-access-hx74n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:27:17 crc kubenswrapper[4786]: I0127 13:27:17.896653 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78638470-5811-4d8d-9600-aa15f9e5baed-scripts" (OuterVolumeSpecName: "scripts") pod "78638470-5811-4d8d-9600-aa15f9e5baed" (UID: "78638470-5811-4d8d-9600-aa15f9e5baed"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:27:17 crc kubenswrapper[4786]: I0127 13:27:17.909653 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78638470-5811-4d8d-9600-aa15f9e5baed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78638470-5811-4d8d-9600-aa15f9e5baed" (UID: "78638470-5811-4d8d-9600-aa15f9e5baed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:27:17 crc kubenswrapper[4786]: I0127 13:27:17.909442 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78638470-5811-4d8d-9600-aa15f9e5baed-config-data" (OuterVolumeSpecName: "config-data") pod "78638470-5811-4d8d-9600-aa15f9e5baed" (UID: "78638470-5811-4d8d-9600-aa15f9e5baed"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:27:17 crc kubenswrapper[4786]: I0127 13:27:17.990156 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78638470-5811-4d8d-9600-aa15f9e5baed-logs\") on node \"crc\" DevicePath \"\"" Jan 27 13:27:17 crc kubenswrapper[4786]: I0127 13:27:17.990703 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78638470-5811-4d8d-9600-aa15f9e5baed-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:27:17 crc kubenswrapper[4786]: I0127 13:27:17.990808 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78638470-5811-4d8d-9600-aa15f9e5baed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 13:27:17 crc kubenswrapper[4786]: I0127 13:27:17.990822 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hx74n\" (UniqueName: \"kubernetes.io/projected/78638470-5811-4d8d-9600-aa15f9e5baed-kube-api-access-hx74n\") on node \"crc\" DevicePath \"\"" Jan 27 13:27:17 crc kubenswrapper[4786]: I0127 13:27:17.990835 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78638470-5811-4d8d-9600-aa15f9e5baed-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:27:18 crc kubenswrapper[4786]: I0127 13:27:18.619708 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-sync-fqbs4" event={"ID":"78638470-5811-4d8d-9600-aa15f9e5baed","Type":"ContainerDied","Data":"cf6932ad2011122b8afa4c82b4e50aac30c3514476665e009ea970e324142d09"} Jan 27 13:27:18 crc kubenswrapper[4786]: I0127 13:27:18.619752 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf6932ad2011122b8afa4c82b4e50aac30c3514476665e009ea970e324142d09" Jan 27 13:27:18 crc kubenswrapper[4786]: I0127 13:27:18.619782 4786 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-sync-fqbs4" Jan 27 13:27:18 crc kubenswrapper[4786]: I0127 13:27:18.621104 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-ghf44" event={"ID":"be1e59cf-7674-4a16-be0e-bd08a540a304","Type":"ContainerStarted","Data":"edaa212c1fe3b94e8eec53e84808cad2829913db09ca5e4f630433b24972d1b6"} Jan 27 13:27:18 crc kubenswrapper[4786]: I0127 13:27:18.655757 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-bootstrap-ghf44" podStartSLOduration=2.655734475 podStartE2EDuration="2.655734475s" podCreationTimestamp="2026-01-27 13:27:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:27:18.643125148 +0000 UTC m=+1221.853739267" watchObservedRunningTime="2026-01-27 13:27:18.655734475 +0000 UTC m=+1221.866348594" Jan 27 13:27:18 crc kubenswrapper[4786]: I0127 13:27:18.709545 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/placement-57fbd5dfd8-mlllb"] Jan 27 13:27:18 crc kubenswrapper[4786]: E0127 13:27:18.709923 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78638470-5811-4d8d-9600-aa15f9e5baed" containerName="placement-db-sync" Jan 27 13:27:18 crc kubenswrapper[4786]: I0127 13:27:18.709945 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="78638470-5811-4d8d-9600-aa15f9e5baed" containerName="placement-db-sync" Jan 27 13:27:18 crc kubenswrapper[4786]: I0127 13:27:18.710133 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="78638470-5811-4d8d-9600-aa15f9e5baed" containerName="placement-db-sync" Jan 27 13:27:18 crc kubenswrapper[4786]: I0127 13:27:18.711098 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-57fbd5dfd8-mlllb" Jan 27 13:27:18 crc kubenswrapper[4786]: I0127 13:27:18.715803 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-scripts" Jan 27 13:27:18 crc kubenswrapper[4786]: I0127 13:27:18.715915 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-config-data" Jan 27 13:27:18 crc kubenswrapper[4786]: I0127 13:27:18.716447 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-placement-dockercfg-dn4dp" Jan 27 13:27:18 crc kubenswrapper[4786]: I0127 13:27:18.725037 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-57fbd5dfd8-mlllb"] Jan 27 13:27:18 crc kubenswrapper[4786]: I0127 13:27:18.804413 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e2844b5-b7ec-43fd-873a-6cdaa879c676-combined-ca-bundle\") pod \"placement-57fbd5dfd8-mlllb\" (UID: \"9e2844b5-b7ec-43fd-873a-6cdaa879c676\") " pod="nova-kuttl-default/placement-57fbd5dfd8-mlllb" Jan 27 13:27:18 crc kubenswrapper[4786]: I0127 13:27:18.804478 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkv74\" (UniqueName: \"kubernetes.io/projected/9e2844b5-b7ec-43fd-873a-6cdaa879c676-kube-api-access-zkv74\") pod \"placement-57fbd5dfd8-mlllb\" (UID: \"9e2844b5-b7ec-43fd-873a-6cdaa879c676\") " pod="nova-kuttl-default/placement-57fbd5dfd8-mlllb" Jan 27 13:27:18 crc kubenswrapper[4786]: I0127 13:27:18.804556 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e2844b5-b7ec-43fd-873a-6cdaa879c676-scripts\") pod \"placement-57fbd5dfd8-mlllb\" (UID: \"9e2844b5-b7ec-43fd-873a-6cdaa879c676\") " 
pod="nova-kuttl-default/placement-57fbd5dfd8-mlllb" Jan 27 13:27:18 crc kubenswrapper[4786]: I0127 13:27:18.804636 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e2844b5-b7ec-43fd-873a-6cdaa879c676-logs\") pod \"placement-57fbd5dfd8-mlllb\" (UID: \"9e2844b5-b7ec-43fd-873a-6cdaa879c676\") " pod="nova-kuttl-default/placement-57fbd5dfd8-mlllb" Jan 27 13:27:18 crc kubenswrapper[4786]: I0127 13:27:18.804673 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e2844b5-b7ec-43fd-873a-6cdaa879c676-config-data\") pod \"placement-57fbd5dfd8-mlllb\" (UID: \"9e2844b5-b7ec-43fd-873a-6cdaa879c676\") " pod="nova-kuttl-default/placement-57fbd5dfd8-mlllb" Jan 27 13:27:18 crc kubenswrapper[4786]: I0127 13:27:18.905578 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e2844b5-b7ec-43fd-873a-6cdaa879c676-combined-ca-bundle\") pod \"placement-57fbd5dfd8-mlllb\" (UID: \"9e2844b5-b7ec-43fd-873a-6cdaa879c676\") " pod="nova-kuttl-default/placement-57fbd5dfd8-mlllb" Jan 27 13:27:18 crc kubenswrapper[4786]: I0127 13:27:18.905657 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkv74\" (UniqueName: \"kubernetes.io/projected/9e2844b5-b7ec-43fd-873a-6cdaa879c676-kube-api-access-zkv74\") pod \"placement-57fbd5dfd8-mlllb\" (UID: \"9e2844b5-b7ec-43fd-873a-6cdaa879c676\") " pod="nova-kuttl-default/placement-57fbd5dfd8-mlllb" Jan 27 13:27:18 crc kubenswrapper[4786]: I0127 13:27:18.905684 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e2844b5-b7ec-43fd-873a-6cdaa879c676-scripts\") pod \"placement-57fbd5dfd8-mlllb\" (UID: \"9e2844b5-b7ec-43fd-873a-6cdaa879c676\") " 
pod="nova-kuttl-default/placement-57fbd5dfd8-mlllb" Jan 27 13:27:18 crc kubenswrapper[4786]: I0127 13:27:18.905729 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e2844b5-b7ec-43fd-873a-6cdaa879c676-logs\") pod \"placement-57fbd5dfd8-mlllb\" (UID: \"9e2844b5-b7ec-43fd-873a-6cdaa879c676\") " pod="nova-kuttl-default/placement-57fbd5dfd8-mlllb" Jan 27 13:27:18 crc kubenswrapper[4786]: I0127 13:27:18.905764 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e2844b5-b7ec-43fd-873a-6cdaa879c676-config-data\") pod \"placement-57fbd5dfd8-mlllb\" (UID: \"9e2844b5-b7ec-43fd-873a-6cdaa879c676\") " pod="nova-kuttl-default/placement-57fbd5dfd8-mlllb" Jan 27 13:27:18 crc kubenswrapper[4786]: I0127 13:27:18.906327 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e2844b5-b7ec-43fd-873a-6cdaa879c676-logs\") pod \"placement-57fbd5dfd8-mlllb\" (UID: \"9e2844b5-b7ec-43fd-873a-6cdaa879c676\") " pod="nova-kuttl-default/placement-57fbd5dfd8-mlllb" Jan 27 13:27:18 crc kubenswrapper[4786]: I0127 13:27:18.911196 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e2844b5-b7ec-43fd-873a-6cdaa879c676-scripts\") pod \"placement-57fbd5dfd8-mlllb\" (UID: \"9e2844b5-b7ec-43fd-873a-6cdaa879c676\") " pod="nova-kuttl-default/placement-57fbd5dfd8-mlllb" Jan 27 13:27:18 crc kubenswrapper[4786]: I0127 13:27:18.919283 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e2844b5-b7ec-43fd-873a-6cdaa879c676-combined-ca-bundle\") pod \"placement-57fbd5dfd8-mlllb\" (UID: \"9e2844b5-b7ec-43fd-873a-6cdaa879c676\") " pod="nova-kuttl-default/placement-57fbd5dfd8-mlllb" Jan 27 13:27:18 crc kubenswrapper[4786]: I0127 13:27:18.919333 
4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e2844b5-b7ec-43fd-873a-6cdaa879c676-config-data\") pod \"placement-57fbd5dfd8-mlllb\" (UID: \"9e2844b5-b7ec-43fd-873a-6cdaa879c676\") " pod="nova-kuttl-default/placement-57fbd5dfd8-mlllb" Jan 27 13:27:18 crc kubenswrapper[4786]: I0127 13:27:18.933767 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkv74\" (UniqueName: \"kubernetes.io/projected/9e2844b5-b7ec-43fd-873a-6cdaa879c676-kube-api-access-zkv74\") pod \"placement-57fbd5dfd8-mlllb\" (UID: \"9e2844b5-b7ec-43fd-873a-6cdaa879c676\") " pod="nova-kuttl-default/placement-57fbd5dfd8-mlllb" Jan 27 13:27:19 crc kubenswrapper[4786]: I0127 13:27:19.030201 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-57fbd5dfd8-mlllb" Jan 27 13:27:19 crc kubenswrapper[4786]: I0127 13:27:19.488635 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-57fbd5dfd8-mlllb"] Jan 27 13:27:19 crc kubenswrapper[4786]: W0127 13:27:19.492439 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e2844b5_b7ec_43fd_873a_6cdaa879c676.slice/crio-682954df774c7a85bf9f3ebc62b4ff4ec860f8d183b3e349af4f6350f8f85350 WatchSource:0}: Error finding container 682954df774c7a85bf9f3ebc62b4ff4ec860f8d183b3e349af4f6350f8f85350: Status 404 returned error can't find the container with id 682954df774c7a85bf9f3ebc62b4ff4ec860f8d183b3e349af4f6350f8f85350 Jan 27 13:27:19 crc kubenswrapper[4786]: I0127 13:27:19.632723 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-57fbd5dfd8-mlllb" event={"ID":"9e2844b5-b7ec-43fd-873a-6cdaa879c676","Type":"ContainerStarted","Data":"682954df774c7a85bf9f3ebc62b4ff4ec860f8d183b3e349af4f6350f8f85350"} Jan 27 13:27:20 crc kubenswrapper[4786]: I0127 
13:27:20.639158 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-57fbd5dfd8-mlllb" event={"ID":"9e2844b5-b7ec-43fd-873a-6cdaa879c676","Type":"ContainerStarted","Data":"d0a2d146456584f950f2554909f6629814c9c959a5f921c402b87ad944a623c5"} Jan 27 13:27:20 crc kubenswrapper[4786]: I0127 13:27:20.639462 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-57fbd5dfd8-mlllb" event={"ID":"9e2844b5-b7ec-43fd-873a-6cdaa879c676","Type":"ContainerStarted","Data":"61280cb66a140e61f3b2ab5b9480e91cf60146ad87acf8c961e211f111cce256"} Jan 27 13:27:20 crc kubenswrapper[4786]: I0127 13:27:20.639524 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/placement-57fbd5dfd8-mlllb" Jan 27 13:27:20 crc kubenswrapper[4786]: I0127 13:27:20.639550 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/placement-57fbd5dfd8-mlllb" Jan 27 13:27:20 crc kubenswrapper[4786]: I0127 13:27:20.659465 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/placement-57fbd5dfd8-mlllb" podStartSLOduration=2.659441195 podStartE2EDuration="2.659441195s" podCreationTimestamp="2026-01-27 13:27:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:27:20.654498329 +0000 UTC m=+1223.865112448" watchObservedRunningTime="2026-01-27 13:27:20.659441195 +0000 UTC m=+1223.870055314" Jan 27 13:27:21 crc kubenswrapper[4786]: I0127 13:27:21.649133 4786 generic.go:334] "Generic (PLEG): container finished" podID="be1e59cf-7674-4a16-be0e-bd08a540a304" containerID="edaa212c1fe3b94e8eec53e84808cad2829913db09ca5e4f630433b24972d1b6" exitCode=0 Jan 27 13:27:21 crc kubenswrapper[4786]: I0127 13:27:21.649215 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-ghf44" 
event={"ID":"be1e59cf-7674-4a16-be0e-bd08a540a304","Type":"ContainerDied","Data":"edaa212c1fe3b94e8eec53e84808cad2829913db09ca5e4f630433b24972d1b6"} Jan 27 13:27:23 crc kubenswrapper[4786]: I0127 13:27:23.034235 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-ghf44" Jan 27 13:27:23 crc kubenswrapper[4786]: I0127 13:27:23.184886 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/be1e59cf-7674-4a16-be0e-bd08a540a304-fernet-keys\") pod \"be1e59cf-7674-4a16-be0e-bd08a540a304\" (UID: \"be1e59cf-7674-4a16-be0e-bd08a540a304\") " Jan 27 13:27:23 crc kubenswrapper[4786]: I0127 13:27:23.185047 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be1e59cf-7674-4a16-be0e-bd08a540a304-scripts\") pod \"be1e59cf-7674-4a16-be0e-bd08a540a304\" (UID: \"be1e59cf-7674-4a16-be0e-bd08a540a304\") " Jan 27 13:27:23 crc kubenswrapper[4786]: I0127 13:27:23.185086 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1e59cf-7674-4a16-be0e-bd08a540a304-combined-ca-bundle\") pod \"be1e59cf-7674-4a16-be0e-bd08a540a304\" (UID: \"be1e59cf-7674-4a16-be0e-bd08a540a304\") " Jan 27 13:27:23 crc kubenswrapper[4786]: I0127 13:27:23.185135 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppnf8\" (UniqueName: \"kubernetes.io/projected/be1e59cf-7674-4a16-be0e-bd08a540a304-kube-api-access-ppnf8\") pod \"be1e59cf-7674-4a16-be0e-bd08a540a304\" (UID: \"be1e59cf-7674-4a16-be0e-bd08a540a304\") " Jan 27 13:27:23 crc kubenswrapper[4786]: I0127 13:27:23.185169 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/be1e59cf-7674-4a16-be0e-bd08a540a304-config-data\") pod \"be1e59cf-7674-4a16-be0e-bd08a540a304\" (UID: \"be1e59cf-7674-4a16-be0e-bd08a540a304\") " Jan 27 13:27:23 crc kubenswrapper[4786]: I0127 13:27:23.185190 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/be1e59cf-7674-4a16-be0e-bd08a540a304-credential-keys\") pod \"be1e59cf-7674-4a16-be0e-bd08a540a304\" (UID: \"be1e59cf-7674-4a16-be0e-bd08a540a304\") " Jan 27 13:27:23 crc kubenswrapper[4786]: I0127 13:27:23.190862 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be1e59cf-7674-4a16-be0e-bd08a540a304-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "be1e59cf-7674-4a16-be0e-bd08a540a304" (UID: "be1e59cf-7674-4a16-be0e-bd08a540a304"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:27:23 crc kubenswrapper[4786]: I0127 13:27:23.191725 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be1e59cf-7674-4a16-be0e-bd08a540a304-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "be1e59cf-7674-4a16-be0e-bd08a540a304" (UID: "be1e59cf-7674-4a16-be0e-bd08a540a304"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:27:23 crc kubenswrapper[4786]: I0127 13:27:23.191745 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be1e59cf-7674-4a16-be0e-bd08a540a304-kube-api-access-ppnf8" (OuterVolumeSpecName: "kube-api-access-ppnf8") pod "be1e59cf-7674-4a16-be0e-bd08a540a304" (UID: "be1e59cf-7674-4a16-be0e-bd08a540a304"). InnerVolumeSpecName "kube-api-access-ppnf8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:27:23 crc kubenswrapper[4786]: I0127 13:27:23.191870 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be1e59cf-7674-4a16-be0e-bd08a540a304-scripts" (OuterVolumeSpecName: "scripts") pod "be1e59cf-7674-4a16-be0e-bd08a540a304" (UID: "be1e59cf-7674-4a16-be0e-bd08a540a304"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:27:23 crc kubenswrapper[4786]: I0127 13:27:23.210864 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be1e59cf-7674-4a16-be0e-bd08a540a304-config-data" (OuterVolumeSpecName: "config-data") pod "be1e59cf-7674-4a16-be0e-bd08a540a304" (UID: "be1e59cf-7674-4a16-be0e-bd08a540a304"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:27:23 crc kubenswrapper[4786]: I0127 13:27:23.215723 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be1e59cf-7674-4a16-be0e-bd08a540a304-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be1e59cf-7674-4a16-be0e-bd08a540a304" (UID: "be1e59cf-7674-4a16-be0e-bd08a540a304"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:27:23 crc kubenswrapper[4786]: I0127 13:27:23.286941 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be1e59cf-7674-4a16-be0e-bd08a540a304-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:27:23 crc kubenswrapper[4786]: I0127 13:27:23.287245 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be1e59cf-7674-4a16-be0e-bd08a540a304-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 13:27:23 crc kubenswrapper[4786]: I0127 13:27:23.287257 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppnf8\" (UniqueName: \"kubernetes.io/projected/be1e59cf-7674-4a16-be0e-bd08a540a304-kube-api-access-ppnf8\") on node \"crc\" DevicePath \"\"" Jan 27 13:27:23 crc kubenswrapper[4786]: I0127 13:27:23.287267 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be1e59cf-7674-4a16-be0e-bd08a540a304-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:27:23 crc kubenswrapper[4786]: I0127 13:27:23.287277 4786 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/be1e59cf-7674-4a16-be0e-bd08a540a304-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 27 13:27:23 crc kubenswrapper[4786]: I0127 13:27:23.287294 4786 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/be1e59cf-7674-4a16-be0e-bd08a540a304-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 13:27:23 crc kubenswrapper[4786]: I0127 13:27:23.674900 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-ghf44" event={"ID":"be1e59cf-7674-4a16-be0e-bd08a540a304","Type":"ContainerDied","Data":"761b94098b8a25773110981e79dd7be6dd1f331036c123acebe7bba77f208863"} Jan 27 13:27:23 crc 
kubenswrapper[4786]: I0127 13:27:23.675146 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="761b94098b8a25773110981e79dd7be6dd1f331036c123acebe7bba77f208863" Jan 27 13:27:23 crc kubenswrapper[4786]: I0127 13:27:23.675327 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-ghf44" Jan 27 13:27:23 crc kubenswrapper[4786]: I0127 13:27:23.731899 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-8567ddf8f4-cxtk8"] Jan 27 13:27:23 crc kubenswrapper[4786]: E0127 13:27:23.732522 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be1e59cf-7674-4a16-be0e-bd08a540a304" containerName="keystone-bootstrap" Jan 27 13:27:23 crc kubenswrapper[4786]: I0127 13:27:23.732665 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="be1e59cf-7674-4a16-be0e-bd08a540a304" containerName="keystone-bootstrap" Jan 27 13:27:23 crc kubenswrapper[4786]: I0127 13:27:23.732954 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="be1e59cf-7674-4a16-be0e-bd08a540a304" containerName="keystone-bootstrap" Jan 27 13:27:23 crc kubenswrapper[4786]: I0127 13:27:23.733714 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-8567ddf8f4-cxtk8" Jan 27 13:27:23 crc kubenswrapper[4786]: I0127 13:27:23.740269 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-keystone-dockercfg-mjw8x" Jan 27 13:27:23 crc kubenswrapper[4786]: I0127 13:27:23.740521 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone" Jan 27 13:27:23 crc kubenswrapper[4786]: I0127 13:27:23.741787 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-scripts" Jan 27 13:27:23 crc kubenswrapper[4786]: I0127 13:27:23.741902 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-config-data" Jan 27 13:27:23 crc kubenswrapper[4786]: I0127 13:27:23.742206 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-8567ddf8f4-cxtk8"] Jan 27 13:27:23 crc kubenswrapper[4786]: I0127 13:27:23.896128 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmxrp\" (UniqueName: \"kubernetes.io/projected/3c6e6e93-d5d9-4b2c-a285-8fd57f9994eb-kube-api-access-cmxrp\") pod \"keystone-8567ddf8f4-cxtk8\" (UID: \"3c6e6e93-d5d9-4b2c-a285-8fd57f9994eb\") " pod="nova-kuttl-default/keystone-8567ddf8f4-cxtk8" Jan 27 13:27:23 crc kubenswrapper[4786]: I0127 13:27:23.896215 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3c6e6e93-d5d9-4b2c-a285-8fd57f9994eb-credential-keys\") pod \"keystone-8567ddf8f4-cxtk8\" (UID: \"3c6e6e93-d5d9-4b2c-a285-8fd57f9994eb\") " pod="nova-kuttl-default/keystone-8567ddf8f4-cxtk8" Jan 27 13:27:23 crc kubenswrapper[4786]: I0127 13:27:23.896246 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3c6e6e93-d5d9-4b2c-a285-8fd57f9994eb-combined-ca-bundle\") pod \"keystone-8567ddf8f4-cxtk8\" (UID: \"3c6e6e93-d5d9-4b2c-a285-8fd57f9994eb\") " pod="nova-kuttl-default/keystone-8567ddf8f4-cxtk8" Jan 27 13:27:23 crc kubenswrapper[4786]: I0127 13:27:23.896337 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c6e6e93-d5d9-4b2c-a285-8fd57f9994eb-scripts\") pod \"keystone-8567ddf8f4-cxtk8\" (UID: \"3c6e6e93-d5d9-4b2c-a285-8fd57f9994eb\") " pod="nova-kuttl-default/keystone-8567ddf8f4-cxtk8" Jan 27 13:27:23 crc kubenswrapper[4786]: I0127 13:27:23.896357 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3c6e6e93-d5d9-4b2c-a285-8fd57f9994eb-fernet-keys\") pod \"keystone-8567ddf8f4-cxtk8\" (UID: \"3c6e6e93-d5d9-4b2c-a285-8fd57f9994eb\") " pod="nova-kuttl-default/keystone-8567ddf8f4-cxtk8" Jan 27 13:27:23 crc kubenswrapper[4786]: I0127 13:27:23.896376 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c6e6e93-d5d9-4b2c-a285-8fd57f9994eb-config-data\") pod \"keystone-8567ddf8f4-cxtk8\" (UID: \"3c6e6e93-d5d9-4b2c-a285-8fd57f9994eb\") " pod="nova-kuttl-default/keystone-8567ddf8f4-cxtk8" Jan 27 13:27:23 crc kubenswrapper[4786]: I0127 13:27:23.997673 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c6e6e93-d5d9-4b2c-a285-8fd57f9994eb-scripts\") pod \"keystone-8567ddf8f4-cxtk8\" (UID: \"3c6e6e93-d5d9-4b2c-a285-8fd57f9994eb\") " pod="nova-kuttl-default/keystone-8567ddf8f4-cxtk8" Jan 27 13:27:23 crc kubenswrapper[4786]: I0127 13:27:23.997738 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/3c6e6e93-d5d9-4b2c-a285-8fd57f9994eb-fernet-keys\") pod \"keystone-8567ddf8f4-cxtk8\" (UID: \"3c6e6e93-d5d9-4b2c-a285-8fd57f9994eb\") " pod="nova-kuttl-default/keystone-8567ddf8f4-cxtk8" Jan 27 13:27:23 crc kubenswrapper[4786]: I0127 13:27:23.997766 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c6e6e93-d5d9-4b2c-a285-8fd57f9994eb-config-data\") pod \"keystone-8567ddf8f4-cxtk8\" (UID: \"3c6e6e93-d5d9-4b2c-a285-8fd57f9994eb\") " pod="nova-kuttl-default/keystone-8567ddf8f4-cxtk8" Jan 27 13:27:23 crc kubenswrapper[4786]: I0127 13:27:23.997806 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmxrp\" (UniqueName: \"kubernetes.io/projected/3c6e6e93-d5d9-4b2c-a285-8fd57f9994eb-kube-api-access-cmxrp\") pod \"keystone-8567ddf8f4-cxtk8\" (UID: \"3c6e6e93-d5d9-4b2c-a285-8fd57f9994eb\") " pod="nova-kuttl-default/keystone-8567ddf8f4-cxtk8" Jan 27 13:27:23 crc kubenswrapper[4786]: I0127 13:27:23.997841 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3c6e6e93-d5d9-4b2c-a285-8fd57f9994eb-credential-keys\") pod \"keystone-8567ddf8f4-cxtk8\" (UID: \"3c6e6e93-d5d9-4b2c-a285-8fd57f9994eb\") " pod="nova-kuttl-default/keystone-8567ddf8f4-cxtk8" Jan 27 13:27:23 crc kubenswrapper[4786]: I0127 13:27:23.997873 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c6e6e93-d5d9-4b2c-a285-8fd57f9994eb-combined-ca-bundle\") pod \"keystone-8567ddf8f4-cxtk8\" (UID: \"3c6e6e93-d5d9-4b2c-a285-8fd57f9994eb\") " pod="nova-kuttl-default/keystone-8567ddf8f4-cxtk8" Jan 27 13:27:24 crc kubenswrapper[4786]: I0127 13:27:24.002041 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/3c6e6e93-d5d9-4b2c-a285-8fd57f9994eb-scripts\") pod \"keystone-8567ddf8f4-cxtk8\" (UID: \"3c6e6e93-d5d9-4b2c-a285-8fd57f9994eb\") " pod="nova-kuttl-default/keystone-8567ddf8f4-cxtk8" Jan 27 13:27:24 crc kubenswrapper[4786]: I0127 13:27:24.002309 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3c6e6e93-d5d9-4b2c-a285-8fd57f9994eb-fernet-keys\") pod \"keystone-8567ddf8f4-cxtk8\" (UID: \"3c6e6e93-d5d9-4b2c-a285-8fd57f9994eb\") " pod="nova-kuttl-default/keystone-8567ddf8f4-cxtk8" Jan 27 13:27:24 crc kubenswrapper[4786]: I0127 13:27:24.002322 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c6e6e93-d5d9-4b2c-a285-8fd57f9994eb-combined-ca-bundle\") pod \"keystone-8567ddf8f4-cxtk8\" (UID: \"3c6e6e93-d5d9-4b2c-a285-8fd57f9994eb\") " pod="nova-kuttl-default/keystone-8567ddf8f4-cxtk8" Jan 27 13:27:24 crc kubenswrapper[4786]: I0127 13:27:24.002448 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c6e6e93-d5d9-4b2c-a285-8fd57f9994eb-config-data\") pod \"keystone-8567ddf8f4-cxtk8\" (UID: \"3c6e6e93-d5d9-4b2c-a285-8fd57f9994eb\") " pod="nova-kuttl-default/keystone-8567ddf8f4-cxtk8" Jan 27 13:27:24 crc kubenswrapper[4786]: I0127 13:27:24.003224 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3c6e6e93-d5d9-4b2c-a285-8fd57f9994eb-credential-keys\") pod \"keystone-8567ddf8f4-cxtk8\" (UID: \"3c6e6e93-d5d9-4b2c-a285-8fd57f9994eb\") " pod="nova-kuttl-default/keystone-8567ddf8f4-cxtk8" Jan 27 13:27:24 crc kubenswrapper[4786]: I0127 13:27:24.013425 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmxrp\" (UniqueName: \"kubernetes.io/projected/3c6e6e93-d5d9-4b2c-a285-8fd57f9994eb-kube-api-access-cmxrp\") pod 
\"keystone-8567ddf8f4-cxtk8\" (UID: \"3c6e6e93-d5d9-4b2c-a285-8fd57f9994eb\") " pod="nova-kuttl-default/keystone-8567ddf8f4-cxtk8" Jan 27 13:27:24 crc kubenswrapper[4786]: I0127 13:27:24.049488 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-8567ddf8f4-cxtk8" Jan 27 13:27:24 crc kubenswrapper[4786]: I0127 13:27:24.464854 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-8567ddf8f4-cxtk8"] Jan 27 13:27:24 crc kubenswrapper[4786]: W0127 13:27:24.468946 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c6e6e93_d5d9_4b2c_a285_8fd57f9994eb.slice/crio-852ab6d4c81e3773e6c680adcc529759e1d55241a40a1607af8c9ec0c13d2e80 WatchSource:0}: Error finding container 852ab6d4c81e3773e6c680adcc529759e1d55241a40a1607af8c9ec0c13d2e80: Status 404 returned error can't find the container with id 852ab6d4c81e3773e6c680adcc529759e1d55241a40a1607af8c9ec0c13d2e80 Jan 27 13:27:24 crc kubenswrapper[4786]: I0127 13:27:24.682151 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-8567ddf8f4-cxtk8" event={"ID":"3c6e6e93-d5d9-4b2c-a285-8fd57f9994eb","Type":"ContainerStarted","Data":"b610352d50ffa6bbd1362d5e76d0bdd279d4a278e8dfbc4514ebb183ad71c424"} Jan 27 13:27:24 crc kubenswrapper[4786]: I0127 13:27:24.682193 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-8567ddf8f4-cxtk8" event={"ID":"3c6e6e93-d5d9-4b2c-a285-8fd57f9994eb","Type":"ContainerStarted","Data":"852ab6d4c81e3773e6c680adcc529759e1d55241a40a1607af8c9ec0c13d2e80"} Jan 27 13:27:24 crc kubenswrapper[4786]: I0127 13:27:24.682285 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/keystone-8567ddf8f4-cxtk8" Jan 27 13:27:24 crc kubenswrapper[4786]: I0127 13:27:24.704370 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="nova-kuttl-default/keystone-8567ddf8f4-cxtk8" podStartSLOduration=1.7043494890000002 podStartE2EDuration="1.704349489s" podCreationTimestamp="2026-01-27 13:27:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:27:24.702444647 +0000 UTC m=+1227.913058766" watchObservedRunningTime="2026-01-27 13:27:24.704349489 +0000 UTC m=+1227.914963628" Jan 27 13:27:50 crc kubenswrapper[4786]: I0127 13:27:50.150131 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/placement-57fbd5dfd8-mlllb" Jan 27 13:27:51 crc kubenswrapper[4786]: I0127 13:27:51.158646 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/placement-57fbd5dfd8-mlllb" Jan 27 13:27:55 crc kubenswrapper[4786]: I0127 13:27:55.571013 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/keystone-8567ddf8f4-cxtk8" Jan 27 13:27:58 crc kubenswrapper[4786]: I0127 13:27:58.317012 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/openstackclient"] Jan 27 13:27:58 crc kubenswrapper[4786]: I0127 13:27:58.318196 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/openstackclient" Jan 27 13:27:58 crc kubenswrapper[4786]: I0127 13:27:58.321056 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-config" Jan 27 13:27:58 crc kubenswrapper[4786]: I0127 13:27:58.321073 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"openstack-config-secret" Jan 27 13:27:58 crc kubenswrapper[4786]: I0127 13:27:58.321162 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"openstackclient-openstackclient-dockercfg-zb4t5" Jan 27 13:27:58 crc kubenswrapper[4786]: I0127 13:27:58.331990 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstackclient"] Jan 27 13:27:58 crc kubenswrapper[4786]: I0127 13:27:58.411193 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ebe93b02-f04c-48d1-8f5f-68e113379180-openstack-config\") pod \"openstackclient\" (UID: \"ebe93b02-f04c-48d1-8f5f-68e113379180\") " pod="nova-kuttl-default/openstackclient" Jan 27 13:27:58 crc kubenswrapper[4786]: I0127 13:27:58.411313 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebe93b02-f04c-48d1-8f5f-68e113379180-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ebe93b02-f04c-48d1-8f5f-68e113379180\") " pod="nova-kuttl-default/openstackclient" Jan 27 13:27:58 crc kubenswrapper[4786]: I0127 13:27:58.411365 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ebe93b02-f04c-48d1-8f5f-68e113379180-openstack-config-secret\") pod \"openstackclient\" (UID: \"ebe93b02-f04c-48d1-8f5f-68e113379180\") " pod="nova-kuttl-default/openstackclient" Jan 27 13:27:58 crc 
kubenswrapper[4786]: I0127 13:27:58.411398 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvn94\" (UniqueName: \"kubernetes.io/projected/ebe93b02-f04c-48d1-8f5f-68e113379180-kube-api-access-pvn94\") pod \"openstackclient\" (UID: \"ebe93b02-f04c-48d1-8f5f-68e113379180\") " pod="nova-kuttl-default/openstackclient" Jan 27 13:27:58 crc kubenswrapper[4786]: I0127 13:27:58.512646 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ebe93b02-f04c-48d1-8f5f-68e113379180-openstack-config\") pod \"openstackclient\" (UID: \"ebe93b02-f04c-48d1-8f5f-68e113379180\") " pod="nova-kuttl-default/openstackclient" Jan 27 13:27:58 crc kubenswrapper[4786]: I0127 13:27:58.512752 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebe93b02-f04c-48d1-8f5f-68e113379180-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ebe93b02-f04c-48d1-8f5f-68e113379180\") " pod="nova-kuttl-default/openstackclient" Jan 27 13:27:58 crc kubenswrapper[4786]: I0127 13:27:58.512802 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ebe93b02-f04c-48d1-8f5f-68e113379180-openstack-config-secret\") pod \"openstackclient\" (UID: \"ebe93b02-f04c-48d1-8f5f-68e113379180\") " pod="nova-kuttl-default/openstackclient" Jan 27 13:27:58 crc kubenswrapper[4786]: I0127 13:27:58.512837 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvn94\" (UniqueName: \"kubernetes.io/projected/ebe93b02-f04c-48d1-8f5f-68e113379180-kube-api-access-pvn94\") pod \"openstackclient\" (UID: \"ebe93b02-f04c-48d1-8f5f-68e113379180\") " pod="nova-kuttl-default/openstackclient" Jan 27 13:27:58 crc kubenswrapper[4786]: I0127 13:27:58.513947 4786 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ebe93b02-f04c-48d1-8f5f-68e113379180-openstack-config\") pod \"openstackclient\" (UID: \"ebe93b02-f04c-48d1-8f5f-68e113379180\") " pod="nova-kuttl-default/openstackclient" Jan 27 13:27:58 crc kubenswrapper[4786]: I0127 13:27:58.518176 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ebe93b02-f04c-48d1-8f5f-68e113379180-openstack-config-secret\") pod \"openstackclient\" (UID: \"ebe93b02-f04c-48d1-8f5f-68e113379180\") " pod="nova-kuttl-default/openstackclient" Jan 27 13:27:58 crc kubenswrapper[4786]: I0127 13:27:58.518227 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebe93b02-f04c-48d1-8f5f-68e113379180-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ebe93b02-f04c-48d1-8f5f-68e113379180\") " pod="nova-kuttl-default/openstackclient" Jan 27 13:27:58 crc kubenswrapper[4786]: I0127 13:27:58.534031 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvn94\" (UniqueName: \"kubernetes.io/projected/ebe93b02-f04c-48d1-8f5f-68e113379180-kube-api-access-pvn94\") pod \"openstackclient\" (UID: \"ebe93b02-f04c-48d1-8f5f-68e113379180\") " pod="nova-kuttl-default/openstackclient" Jan 27 13:27:58 crc kubenswrapper[4786]: I0127 13:27:58.635921 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/openstackclient" Jan 27 13:27:59 crc kubenswrapper[4786]: I0127 13:27:59.084746 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstackclient"] Jan 27 13:27:59 crc kubenswrapper[4786]: I0127 13:27:59.723129 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstackclient" event={"ID":"ebe93b02-f04c-48d1-8f5f-68e113379180","Type":"ContainerStarted","Data":"2afed29c6fdae2ba6a7caf075c32bbd2b1e5c71a796a14d0de7ded19ddd26965"} Jan 27 13:28:07 crc kubenswrapper[4786]: I0127 13:28:07.795028 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstackclient" event={"ID":"ebe93b02-f04c-48d1-8f5f-68e113379180","Type":"ContainerStarted","Data":"bf62bc59506dc8b78ec757582d2b26f4f41f94c040d1441b0b48c94bd31e0de8"} Jan 27 13:28:07 crc kubenswrapper[4786]: I0127 13:28:07.810336 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/openstackclient" podStartSLOduration=2.162093141 podStartE2EDuration="9.810309067s" podCreationTimestamp="2026-01-27 13:27:58 +0000 UTC" firstStartedPulling="2026-01-27 13:27:59.090318748 +0000 UTC m=+1262.300932867" lastFinishedPulling="2026-01-27 13:28:06.738534654 +0000 UTC m=+1269.949148793" observedRunningTime="2026-01-27 13:28:07.807773108 +0000 UTC m=+1271.018387287" watchObservedRunningTime="2026-01-27 13:28:07.810309067 +0000 UTC m=+1271.020923196" Jan 27 13:28:14 crc kubenswrapper[4786]: I0127 13:28:14.736871 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6cffd64fd8-cgnm5"] Jan 27 13:28:14 crc kubenswrapper[4786]: I0127 13:28:14.737698 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/nova-operator-controller-manager-6cffd64fd8-cgnm5" podUID="1d8c2e75-6d30-4c9f-9f3f-82bda48f6d43" containerName="manager" 
containerID="cri-o://206bdccfb8f90499d495acca02e13de635bf3c8ca16e54032fd3e46e47c8d1bc" gracePeriod=10 Jan 27 13:28:14 crc kubenswrapper[4786]: I0127 13:28:14.827504 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7f8557b66c-vtv48"] Jan 27 13:28:14 crc kubenswrapper[4786]: I0127 13:28:14.827769 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-controller-init-7f8557b66c-vtv48" podUID="6be126f6-6357-4e54-b9a3-7f6a996bdc0c" containerName="operator" containerID="cri-o://77fe1abbfd88d4bdf51a8270e582074210a32cabb3b31931d8631e758bb48fb6" gracePeriod=10 Jan 27 13:28:15 crc kubenswrapper[4786]: I0127 13:28:15.172909 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6cffd64fd8-cgnm5" Jan 27 13:28:15 crc kubenswrapper[4786]: I0127 13:28:15.296431 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqzlg\" (UniqueName: \"kubernetes.io/projected/1d8c2e75-6d30-4c9f-9f3f-82bda48f6d43-kube-api-access-nqzlg\") pod \"1d8c2e75-6d30-4c9f-9f3f-82bda48f6d43\" (UID: \"1d8c2e75-6d30-4c9f-9f3f-82bda48f6d43\") " Jan 27 13:28:15 crc kubenswrapper[4786]: I0127 13:28:15.303798 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d8c2e75-6d30-4c9f-9f3f-82bda48f6d43-kube-api-access-nqzlg" (OuterVolumeSpecName: "kube-api-access-nqzlg") pod "1d8c2e75-6d30-4c9f-9f3f-82bda48f6d43" (UID: "1d8c2e75-6d30-4c9f-9f3f-82bda48f6d43"). InnerVolumeSpecName "kube-api-access-nqzlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:28:15 crc kubenswrapper[4786]: I0127 13:28:15.307557 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7f8557b66c-vtv48" Jan 27 13:28:15 crc kubenswrapper[4786]: I0127 13:28:15.391997 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-index-r2449"] Jan 27 13:28:15 crc kubenswrapper[4786]: E0127 13:28:15.392639 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d8c2e75-6d30-4c9f-9f3f-82bda48f6d43" containerName="manager" Jan 27 13:28:15 crc kubenswrapper[4786]: I0127 13:28:15.392663 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d8c2e75-6d30-4c9f-9f3f-82bda48f6d43" containerName="manager" Jan 27 13:28:15 crc kubenswrapper[4786]: E0127 13:28:15.392674 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6be126f6-6357-4e54-b9a3-7f6a996bdc0c" containerName="operator" Jan 27 13:28:15 crc kubenswrapper[4786]: I0127 13:28:15.392683 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="6be126f6-6357-4e54-b9a3-7f6a996bdc0c" containerName="operator" Jan 27 13:28:15 crc kubenswrapper[4786]: I0127 13:28:15.392864 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="6be126f6-6357-4e54-b9a3-7f6a996bdc0c" containerName="operator" Jan 27 13:28:15 crc kubenswrapper[4786]: I0127 13:28:15.392888 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d8c2e75-6d30-4c9f-9f3f-82bda48f6d43" containerName="manager" Jan 27 13:28:15 crc kubenswrapper[4786]: I0127 13:28:15.393637 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-index-r2449" Jan 27 13:28:15 crc kubenswrapper[4786]: W0127 13:28:15.398409 4786 reflector.go:561] object-"openstack-operators"/"nova-operator-index-dockercfg-99sxb": failed to list *v1.Secret: secrets "nova-operator-index-dockercfg-99sxb" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack-operators": no relationship found between node 'crc' and this object Jan 27 13:28:15 crc kubenswrapper[4786]: E0127 13:28:15.398493 4786 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"nova-operator-index-dockercfg-99sxb\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"nova-operator-index-dockercfg-99sxb\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack-operators\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 13:28:15 crc kubenswrapper[4786]: I0127 13:28:15.400067 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n8dj\" (UniqueName: \"kubernetes.io/projected/6be126f6-6357-4e54-b9a3-7f6a996bdc0c-kube-api-access-9n8dj\") pod \"6be126f6-6357-4e54-b9a3-7f6a996bdc0c\" (UID: \"6be126f6-6357-4e54-b9a3-7f6a996bdc0c\") " Jan 27 13:28:15 crc kubenswrapper[4786]: I0127 13:28:15.400679 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqzlg\" (UniqueName: \"kubernetes.io/projected/1d8c2e75-6d30-4c9f-9f3f-82bda48f6d43-kube-api-access-nqzlg\") on node \"crc\" DevicePath \"\"" Jan 27 13:28:15 crc kubenswrapper[4786]: I0127 13:28:15.412819 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6be126f6-6357-4e54-b9a3-7f6a996bdc0c-kube-api-access-9n8dj" (OuterVolumeSpecName: "kube-api-access-9n8dj") pod "6be126f6-6357-4e54-b9a3-7f6a996bdc0c" (UID: "6be126f6-6357-4e54-b9a3-7f6a996bdc0c"). 
InnerVolumeSpecName "kube-api-access-9n8dj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:28:15 crc kubenswrapper[4786]: I0127 13:28:15.422628 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-index-r2449"] Jan 27 13:28:15 crc kubenswrapper[4786]: I0127 13:28:15.504886 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj89p\" (UniqueName: \"kubernetes.io/projected/da0a7696-fe5a-40ea-beb5-1b42af498812-kube-api-access-xj89p\") pod \"nova-operator-index-r2449\" (UID: \"da0a7696-fe5a-40ea-beb5-1b42af498812\") " pod="openstack-operators/nova-operator-index-r2449" Jan 27 13:28:15 crc kubenswrapper[4786]: I0127 13:28:15.504988 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n8dj\" (UniqueName: \"kubernetes.io/projected/6be126f6-6357-4e54-b9a3-7f6a996bdc0c-kube-api-access-9n8dj\") on node \"crc\" DevicePath \"\"" Jan 27 13:28:15 crc kubenswrapper[4786]: I0127 13:28:15.606795 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj89p\" (UniqueName: \"kubernetes.io/projected/da0a7696-fe5a-40ea-beb5-1b42af498812-kube-api-access-xj89p\") pod \"nova-operator-index-r2449\" (UID: \"da0a7696-fe5a-40ea-beb5-1b42af498812\") " pod="openstack-operators/nova-operator-index-r2449" Jan 27 13:28:15 crc kubenswrapper[4786]: I0127 13:28:15.625642 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj89p\" (UniqueName: \"kubernetes.io/projected/da0a7696-fe5a-40ea-beb5-1b42af498812-kube-api-access-xj89p\") pod \"nova-operator-index-r2449\" (UID: \"da0a7696-fe5a-40ea-beb5-1b42af498812\") " pod="openstack-operators/nova-operator-index-r2449" Jan 27 13:28:15 crc kubenswrapper[4786]: I0127 13:28:15.854890 4786 generic.go:334] "Generic (PLEG): container finished" podID="6be126f6-6357-4e54-b9a3-7f6a996bdc0c" 
containerID="77fe1abbfd88d4bdf51a8270e582074210a32cabb3b31931d8631e758bb48fb6" exitCode=0 Jan 27 13:28:15 crc kubenswrapper[4786]: I0127 13:28:15.854939 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7f8557b66c-vtv48" Jan 27 13:28:15 crc kubenswrapper[4786]: I0127 13:28:15.854973 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7f8557b66c-vtv48" event={"ID":"6be126f6-6357-4e54-b9a3-7f6a996bdc0c","Type":"ContainerDied","Data":"77fe1abbfd88d4bdf51a8270e582074210a32cabb3b31931d8631e758bb48fb6"} Jan 27 13:28:15 crc kubenswrapper[4786]: I0127 13:28:15.855015 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7f8557b66c-vtv48" event={"ID":"6be126f6-6357-4e54-b9a3-7f6a996bdc0c","Type":"ContainerDied","Data":"66f03e3125ca6e3aa79c0833c477b170e62744cb43fc3a68ea1f23a4a5f6c8f8"} Jan 27 13:28:15 crc kubenswrapper[4786]: I0127 13:28:15.855036 4786 scope.go:117] "RemoveContainer" containerID="77fe1abbfd88d4bdf51a8270e582074210a32cabb3b31931d8631e758bb48fb6" Jan 27 13:28:15 crc kubenswrapper[4786]: I0127 13:28:15.856501 4786 generic.go:334] "Generic (PLEG): container finished" podID="1d8c2e75-6d30-4c9f-9f3f-82bda48f6d43" containerID="206bdccfb8f90499d495acca02e13de635bf3c8ca16e54032fd3e46e47c8d1bc" exitCode=0 Jan 27 13:28:15 crc kubenswrapper[4786]: I0127 13:28:15.856538 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6cffd64fd8-cgnm5" event={"ID":"1d8c2e75-6d30-4c9f-9f3f-82bda48f6d43","Type":"ContainerDied","Data":"206bdccfb8f90499d495acca02e13de635bf3c8ca16e54032fd3e46e47c8d1bc"} Jan 27 13:28:15 crc kubenswrapper[4786]: I0127 13:28:15.856559 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6cffd64fd8-cgnm5" Jan 27 13:28:15 crc kubenswrapper[4786]: I0127 13:28:15.856582 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6cffd64fd8-cgnm5" event={"ID":"1d8c2e75-6d30-4c9f-9f3f-82bda48f6d43","Type":"ContainerDied","Data":"b7fbe6e64c168fb73cdff58a4c58439c1688d928816252643b3db07ef4ebb6f5"} Jan 27 13:28:15 crc kubenswrapper[4786]: I0127 13:28:15.894815 4786 scope.go:117] "RemoveContainer" containerID="77fe1abbfd88d4bdf51a8270e582074210a32cabb3b31931d8631e758bb48fb6" Jan 27 13:28:15 crc kubenswrapper[4786]: E0127 13:28:15.895292 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77fe1abbfd88d4bdf51a8270e582074210a32cabb3b31931d8631e758bb48fb6\": container with ID starting with 77fe1abbfd88d4bdf51a8270e582074210a32cabb3b31931d8631e758bb48fb6 not found: ID does not exist" containerID="77fe1abbfd88d4bdf51a8270e582074210a32cabb3b31931d8631e758bb48fb6" Jan 27 13:28:15 crc kubenswrapper[4786]: I0127 13:28:15.895371 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77fe1abbfd88d4bdf51a8270e582074210a32cabb3b31931d8631e758bb48fb6"} err="failed to get container status \"77fe1abbfd88d4bdf51a8270e582074210a32cabb3b31931d8631e758bb48fb6\": rpc error: code = NotFound desc = could not find container \"77fe1abbfd88d4bdf51a8270e582074210a32cabb3b31931d8631e758bb48fb6\": container with ID starting with 77fe1abbfd88d4bdf51a8270e582074210a32cabb3b31931d8631e758bb48fb6 not found: ID does not exist" Jan 27 13:28:15 crc kubenswrapper[4786]: I0127 13:28:15.895400 4786 scope.go:117] "RemoveContainer" containerID="206bdccfb8f90499d495acca02e13de635bf3c8ca16e54032fd3e46e47c8d1bc" Jan 27 13:28:15 crc kubenswrapper[4786]: I0127 13:28:15.909867 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack-operators/nova-operator-controller-manager-6cffd64fd8-cgnm5"] Jan 27 13:28:15 crc kubenswrapper[4786]: I0127 13:28:15.911798 4786 scope.go:117] "RemoveContainer" containerID="206bdccfb8f90499d495acca02e13de635bf3c8ca16e54032fd3e46e47c8d1bc" Jan 27 13:28:15 crc kubenswrapper[4786]: E0127 13:28:15.912246 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"206bdccfb8f90499d495acca02e13de635bf3c8ca16e54032fd3e46e47c8d1bc\": container with ID starting with 206bdccfb8f90499d495acca02e13de635bf3c8ca16e54032fd3e46e47c8d1bc not found: ID does not exist" containerID="206bdccfb8f90499d495acca02e13de635bf3c8ca16e54032fd3e46e47c8d1bc" Jan 27 13:28:15 crc kubenswrapper[4786]: I0127 13:28:15.912279 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"206bdccfb8f90499d495acca02e13de635bf3c8ca16e54032fd3e46e47c8d1bc"} err="failed to get container status \"206bdccfb8f90499d495acca02e13de635bf3c8ca16e54032fd3e46e47c8d1bc\": rpc error: code = NotFound desc = could not find container \"206bdccfb8f90499d495acca02e13de635bf3c8ca16e54032fd3e46e47c8d1bc\": container with ID starting with 206bdccfb8f90499d495acca02e13de635bf3c8ca16e54032fd3e46e47c8d1bc not found: ID does not exist" Jan 27 13:28:15 crc kubenswrapper[4786]: I0127 13:28:15.916737 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6cffd64fd8-cgnm5"] Jan 27 13:28:15 crc kubenswrapper[4786]: I0127 13:28:15.922342 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7f8557b66c-vtv48"] Jan 27 13:28:15 crc kubenswrapper[4786]: I0127 13:28:15.928324 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7f8557b66c-vtv48"] Jan 27 13:28:16 crc kubenswrapper[4786]: I0127 13:28:16.458022 4786 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack-operators"/"nova-operator-index-dockercfg-99sxb" Jan 27 13:28:16 crc kubenswrapper[4786]: I0127 13:28:16.458200 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-index-r2449" Jan 27 13:28:16 crc kubenswrapper[4786]: I0127 13:28:16.915917 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-index-r2449"] Jan 27 13:28:17 crc kubenswrapper[4786]: I0127 13:28:17.477315 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d8c2e75-6d30-4c9f-9f3f-82bda48f6d43" path="/var/lib/kubelet/pods/1d8c2e75-6d30-4c9f-9f3f-82bda48f6d43/volumes" Jan 27 13:28:17 crc kubenswrapper[4786]: I0127 13:28:17.478221 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6be126f6-6357-4e54-b9a3-7f6a996bdc0c" path="/var/lib/kubelet/pods/6be126f6-6357-4e54-b9a3-7f6a996bdc0c/volumes" Jan 27 13:28:17 crc kubenswrapper[4786]: I0127 13:28:17.830866 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/nova-operator-index-r2449"] Jan 27 13:28:17 crc kubenswrapper[4786]: I0127 13:28:17.892423 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-r2449" event={"ID":"da0a7696-fe5a-40ea-beb5-1b42af498812","Type":"ContainerStarted","Data":"e3bcd51ee711668413ff1066dfe513cb5d211643d2431d6ec077cb9a367572b1"} Jan 27 13:28:17 crc kubenswrapper[4786]: I0127 13:28:17.892472 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-r2449" event={"ID":"da0a7696-fe5a-40ea-beb5-1b42af498812","Type":"ContainerStarted","Data":"9abac97ebddae06705eacf39b83b926e945de2472966bedbf0f6a4f76048ca07"} Jan 27 13:28:17 crc kubenswrapper[4786]: I0127 13:28:17.911955 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-index-r2449" podStartSLOduration=2.440627234 
podStartE2EDuration="2.911933676s" podCreationTimestamp="2026-01-27 13:28:15 +0000 UTC" firstStartedPulling="2026-01-27 13:28:16.928596423 +0000 UTC m=+1280.139210552" lastFinishedPulling="2026-01-27 13:28:17.399902885 +0000 UTC m=+1280.610516994" observedRunningTime="2026-01-27 13:28:17.908152501 +0000 UTC m=+1281.118766610" watchObservedRunningTime="2026-01-27 13:28:17.911933676 +0000 UTC m=+1281.122547795" Jan 27 13:28:18 crc kubenswrapper[4786]: I0127 13:28:18.234593 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-index-pjw9n"] Jan 27 13:28:18 crc kubenswrapper[4786]: I0127 13:28:18.235734 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-index-pjw9n" Jan 27 13:28:18 crc kubenswrapper[4786]: I0127 13:28:18.246442 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-index-pjw9n"] Jan 27 13:28:18 crc kubenswrapper[4786]: I0127 13:28:18.347133 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45zn5\" (UniqueName: \"kubernetes.io/projected/198411ea-9abf-4fe2-b7bb-95be72d0aa84-kube-api-access-45zn5\") pod \"nova-operator-index-pjw9n\" (UID: \"198411ea-9abf-4fe2-b7bb-95be72d0aa84\") " pod="openstack-operators/nova-operator-index-pjw9n" Jan 27 13:28:18 crc kubenswrapper[4786]: I0127 13:28:18.449117 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45zn5\" (UniqueName: \"kubernetes.io/projected/198411ea-9abf-4fe2-b7bb-95be72d0aa84-kube-api-access-45zn5\") pod \"nova-operator-index-pjw9n\" (UID: \"198411ea-9abf-4fe2-b7bb-95be72d0aa84\") " pod="openstack-operators/nova-operator-index-pjw9n" Jan 27 13:28:18 crc kubenswrapper[4786]: I0127 13:28:18.472143 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45zn5\" (UniqueName: 
\"kubernetes.io/projected/198411ea-9abf-4fe2-b7bb-95be72d0aa84-kube-api-access-45zn5\") pod \"nova-operator-index-pjw9n\" (UID: \"198411ea-9abf-4fe2-b7bb-95be72d0aa84\") " pod="openstack-operators/nova-operator-index-pjw9n" Jan 27 13:28:18 crc kubenswrapper[4786]: I0127 13:28:18.554496 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-index-pjw9n" Jan 27 13:28:18 crc kubenswrapper[4786]: I0127 13:28:18.899918 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/nova-operator-index-r2449" podUID="da0a7696-fe5a-40ea-beb5-1b42af498812" containerName="registry-server" containerID="cri-o://e3bcd51ee711668413ff1066dfe513cb5d211643d2431d6ec077cb9a367572b1" gracePeriod=2 Jan 27 13:28:18 crc kubenswrapper[4786]: I0127 13:28:18.983302 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-index-pjw9n"] Jan 27 13:28:19 crc kubenswrapper[4786]: I0127 13:28:19.344044 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-index-r2449" Jan 27 13:28:19 crc kubenswrapper[4786]: I0127 13:28:19.464015 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xj89p\" (UniqueName: \"kubernetes.io/projected/da0a7696-fe5a-40ea-beb5-1b42af498812-kube-api-access-xj89p\") pod \"da0a7696-fe5a-40ea-beb5-1b42af498812\" (UID: \"da0a7696-fe5a-40ea-beb5-1b42af498812\") " Jan 27 13:28:19 crc kubenswrapper[4786]: I0127 13:28:19.470409 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da0a7696-fe5a-40ea-beb5-1b42af498812-kube-api-access-xj89p" (OuterVolumeSpecName: "kube-api-access-xj89p") pod "da0a7696-fe5a-40ea-beb5-1b42af498812" (UID: "da0a7696-fe5a-40ea-beb5-1b42af498812"). InnerVolumeSpecName "kube-api-access-xj89p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:28:19 crc kubenswrapper[4786]: I0127 13:28:19.565942 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xj89p\" (UniqueName: \"kubernetes.io/projected/da0a7696-fe5a-40ea-beb5-1b42af498812-kube-api-access-xj89p\") on node \"crc\" DevicePath \"\"" Jan 27 13:28:19 crc kubenswrapper[4786]: I0127 13:28:19.907467 4786 generic.go:334] "Generic (PLEG): container finished" podID="da0a7696-fe5a-40ea-beb5-1b42af498812" containerID="e3bcd51ee711668413ff1066dfe513cb5d211643d2431d6ec077cb9a367572b1" exitCode=0 Jan 27 13:28:19 crc kubenswrapper[4786]: I0127 13:28:19.907542 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-r2449" event={"ID":"da0a7696-fe5a-40ea-beb5-1b42af498812","Type":"ContainerDied","Data":"e3bcd51ee711668413ff1066dfe513cb5d211643d2431d6ec077cb9a367572b1"} Jan 27 13:28:19 crc kubenswrapper[4786]: I0127 13:28:19.907573 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-r2449" event={"ID":"da0a7696-fe5a-40ea-beb5-1b42af498812","Type":"ContainerDied","Data":"9abac97ebddae06705eacf39b83b926e945de2472966bedbf0f6a4f76048ca07"} Jan 27 13:28:19 crc kubenswrapper[4786]: I0127 13:28:19.907592 4786 scope.go:117] "RemoveContainer" containerID="e3bcd51ee711668413ff1066dfe513cb5d211643d2431d6ec077cb9a367572b1" Jan 27 13:28:19 crc kubenswrapper[4786]: I0127 13:28:19.907744 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-index-r2449" Jan 27 13:28:19 crc kubenswrapper[4786]: I0127 13:28:19.911165 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-pjw9n" event={"ID":"198411ea-9abf-4fe2-b7bb-95be72d0aa84","Type":"ContainerStarted","Data":"57c697492cd2bb6c6f183a96a26d4e6d5bbef5d47aface3bc8726d9b7e16f238"} Jan 27 13:28:19 crc kubenswrapper[4786]: I0127 13:28:19.911208 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-pjw9n" event={"ID":"198411ea-9abf-4fe2-b7bb-95be72d0aa84","Type":"ContainerStarted","Data":"acc38650cc600e0b29dc144a3a1d7d175b5a425984a501dc730097c07bc87ce7"} Jan 27 13:28:19 crc kubenswrapper[4786]: I0127 13:28:19.927953 4786 scope.go:117] "RemoveContainer" containerID="e3bcd51ee711668413ff1066dfe513cb5d211643d2431d6ec077cb9a367572b1" Jan 27 13:28:19 crc kubenswrapper[4786]: E0127 13:28:19.928757 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3bcd51ee711668413ff1066dfe513cb5d211643d2431d6ec077cb9a367572b1\": container with ID starting with e3bcd51ee711668413ff1066dfe513cb5d211643d2431d6ec077cb9a367572b1 not found: ID does not exist" containerID="e3bcd51ee711668413ff1066dfe513cb5d211643d2431d6ec077cb9a367572b1" Jan 27 13:28:19 crc kubenswrapper[4786]: I0127 13:28:19.928807 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3bcd51ee711668413ff1066dfe513cb5d211643d2431d6ec077cb9a367572b1"} err="failed to get container status \"e3bcd51ee711668413ff1066dfe513cb5d211643d2431d6ec077cb9a367572b1\": rpc error: code = NotFound desc = could not find container \"e3bcd51ee711668413ff1066dfe513cb5d211643d2431d6ec077cb9a367572b1\": container with ID starting with e3bcd51ee711668413ff1066dfe513cb5d211643d2431d6ec077cb9a367572b1 not found: ID does not exist" Jan 27 13:28:19 crc kubenswrapper[4786]: 
I0127 13:28:19.929917 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/nova-operator-index-r2449"] Jan 27 13:28:19 crc kubenswrapper[4786]: I0127 13:28:19.935870 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/nova-operator-index-r2449"] Jan 27 13:28:19 crc kubenswrapper[4786]: I0127 13:28:19.940244 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-index-pjw9n" podStartSLOduration=1.843691172 podStartE2EDuration="1.940204204s" podCreationTimestamp="2026-01-27 13:28:18 +0000 UTC" firstStartedPulling="2026-01-27 13:28:18.9906776 +0000 UTC m=+1282.201291719" lastFinishedPulling="2026-01-27 13:28:19.087190632 +0000 UTC m=+1282.297804751" observedRunningTime="2026-01-27 13:28:19.93972759 +0000 UTC m=+1283.150341729" watchObservedRunningTime="2026-01-27 13:28:19.940204204 +0000 UTC m=+1283.150818343" Jan 27 13:28:21 crc kubenswrapper[4786]: I0127 13:28:21.475668 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da0a7696-fe5a-40ea-beb5-1b42af498812" path="/var/lib/kubelet/pods/da0a7696-fe5a-40ea-beb5-1b42af498812/volumes" Jan 27 13:28:28 crc kubenswrapper[4786]: I0127 13:28:28.555376 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/nova-operator-index-pjw9n" Jan 27 13:28:28 crc kubenswrapper[4786]: I0127 13:28:28.555912 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-index-pjw9n" Jan 27 13:28:28 crc kubenswrapper[4786]: I0127 13:28:28.583123 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/nova-operator-index-pjw9n" Jan 27 13:28:29 crc kubenswrapper[4786]: I0127 13:28:29.000760 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-index-pjw9n" Jan 27 13:28:37 crc kubenswrapper[4786]: I0127 13:28:37.488123 
4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/85eb71024025e4cc66b960472c0fb0311115e3bc192ae016fd46dabe45pgb2p"] Jan 27 13:28:37 crc kubenswrapper[4786]: E0127 13:28:37.489200 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da0a7696-fe5a-40ea-beb5-1b42af498812" containerName="registry-server" Jan 27 13:28:37 crc kubenswrapper[4786]: I0127 13:28:37.489216 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="da0a7696-fe5a-40ea-beb5-1b42af498812" containerName="registry-server" Jan 27 13:28:37 crc kubenswrapper[4786]: I0127 13:28:37.489380 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="da0a7696-fe5a-40ea-beb5-1b42af498812" containerName="registry-server" Jan 27 13:28:37 crc kubenswrapper[4786]: I0127 13:28:37.490691 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/85eb71024025e4cc66b960472c0fb0311115e3bc192ae016fd46dabe45pgb2p" Jan 27 13:28:37 crc kubenswrapper[4786]: I0127 13:28:37.500132 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-nv428" Jan 27 13:28:37 crc kubenswrapper[4786]: I0127 13:28:37.512454 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/85eb71024025e4cc66b960472c0fb0311115e3bc192ae016fd46dabe45pgb2p"] Jan 27 13:28:37 crc kubenswrapper[4786]: I0127 13:28:37.555389 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgmx8\" (UniqueName: \"kubernetes.io/projected/3f10db36-4147-4749-8356-334a343efd90-kube-api-access-sgmx8\") pod \"85eb71024025e4cc66b960472c0fb0311115e3bc192ae016fd46dabe45pgb2p\" (UID: \"3f10db36-4147-4749-8356-334a343efd90\") " pod="openstack-operators/85eb71024025e4cc66b960472c0fb0311115e3bc192ae016fd46dabe45pgb2p" Jan 27 13:28:37 crc kubenswrapper[4786]: I0127 13:28:37.555466 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3f10db36-4147-4749-8356-334a343efd90-util\") pod \"85eb71024025e4cc66b960472c0fb0311115e3bc192ae016fd46dabe45pgb2p\" (UID: \"3f10db36-4147-4749-8356-334a343efd90\") " pod="openstack-operators/85eb71024025e4cc66b960472c0fb0311115e3bc192ae016fd46dabe45pgb2p" Jan 27 13:28:37 crc kubenswrapper[4786]: I0127 13:28:37.555580 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3f10db36-4147-4749-8356-334a343efd90-bundle\") pod \"85eb71024025e4cc66b960472c0fb0311115e3bc192ae016fd46dabe45pgb2p\" (UID: \"3f10db36-4147-4749-8356-334a343efd90\") " pod="openstack-operators/85eb71024025e4cc66b960472c0fb0311115e3bc192ae016fd46dabe45pgb2p" Jan 27 13:28:37 crc kubenswrapper[4786]: I0127 13:28:37.656995 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgmx8\" (UniqueName: \"kubernetes.io/projected/3f10db36-4147-4749-8356-334a343efd90-kube-api-access-sgmx8\") pod \"85eb71024025e4cc66b960472c0fb0311115e3bc192ae016fd46dabe45pgb2p\" (UID: \"3f10db36-4147-4749-8356-334a343efd90\") " pod="openstack-operators/85eb71024025e4cc66b960472c0fb0311115e3bc192ae016fd46dabe45pgb2p" Jan 27 13:28:37 crc kubenswrapper[4786]: I0127 13:28:37.657042 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3f10db36-4147-4749-8356-334a343efd90-util\") pod \"85eb71024025e4cc66b960472c0fb0311115e3bc192ae016fd46dabe45pgb2p\" (UID: \"3f10db36-4147-4749-8356-334a343efd90\") " pod="openstack-operators/85eb71024025e4cc66b960472c0fb0311115e3bc192ae016fd46dabe45pgb2p" Jan 27 13:28:37 crc kubenswrapper[4786]: I0127 13:28:37.657100 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/3f10db36-4147-4749-8356-334a343efd90-bundle\") pod \"85eb71024025e4cc66b960472c0fb0311115e3bc192ae016fd46dabe45pgb2p\" (UID: \"3f10db36-4147-4749-8356-334a343efd90\") " pod="openstack-operators/85eb71024025e4cc66b960472c0fb0311115e3bc192ae016fd46dabe45pgb2p" Jan 27 13:28:37 crc kubenswrapper[4786]: I0127 13:28:37.657720 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3f10db36-4147-4749-8356-334a343efd90-bundle\") pod \"85eb71024025e4cc66b960472c0fb0311115e3bc192ae016fd46dabe45pgb2p\" (UID: \"3f10db36-4147-4749-8356-334a343efd90\") " pod="openstack-operators/85eb71024025e4cc66b960472c0fb0311115e3bc192ae016fd46dabe45pgb2p" Jan 27 13:28:37 crc kubenswrapper[4786]: I0127 13:28:37.657969 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3f10db36-4147-4749-8356-334a343efd90-util\") pod \"85eb71024025e4cc66b960472c0fb0311115e3bc192ae016fd46dabe45pgb2p\" (UID: \"3f10db36-4147-4749-8356-334a343efd90\") " pod="openstack-operators/85eb71024025e4cc66b960472c0fb0311115e3bc192ae016fd46dabe45pgb2p" Jan 27 13:28:37 crc kubenswrapper[4786]: I0127 13:28:37.684077 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgmx8\" (UniqueName: \"kubernetes.io/projected/3f10db36-4147-4749-8356-334a343efd90-kube-api-access-sgmx8\") pod \"85eb71024025e4cc66b960472c0fb0311115e3bc192ae016fd46dabe45pgb2p\" (UID: \"3f10db36-4147-4749-8356-334a343efd90\") " pod="openstack-operators/85eb71024025e4cc66b960472c0fb0311115e3bc192ae016fd46dabe45pgb2p" Jan 27 13:28:37 crc kubenswrapper[4786]: I0127 13:28:37.815983 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/85eb71024025e4cc66b960472c0fb0311115e3bc192ae016fd46dabe45pgb2p" Jan 27 13:28:38 crc kubenswrapper[4786]: I0127 13:28:38.355861 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/85eb71024025e4cc66b960472c0fb0311115e3bc192ae016fd46dabe45pgb2p"] Jan 27 13:28:38 crc kubenswrapper[4786]: W0127 13:28:38.362794 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f10db36_4147_4749_8356_334a343efd90.slice/crio-b42526c85c29c2d60145ee57cc31bf653117aeaa7d56be811e23650f20a21475 WatchSource:0}: Error finding container b42526c85c29c2d60145ee57cc31bf653117aeaa7d56be811e23650f20a21475: Status 404 returned error can't find the container with id b42526c85c29c2d60145ee57cc31bf653117aeaa7d56be811e23650f20a21475 Jan 27 13:28:39 crc kubenswrapper[4786]: I0127 13:28:39.071107 4786 generic.go:334] "Generic (PLEG): container finished" podID="3f10db36-4147-4749-8356-334a343efd90" containerID="ba5887eaae5c0f811b8fd92a2bcac6c669077ec0a6b8c1ef4b33afab99fc2359" exitCode=0 Jan 27 13:28:39 crc kubenswrapper[4786]: I0127 13:28:39.071150 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/85eb71024025e4cc66b960472c0fb0311115e3bc192ae016fd46dabe45pgb2p" event={"ID":"3f10db36-4147-4749-8356-334a343efd90","Type":"ContainerDied","Data":"ba5887eaae5c0f811b8fd92a2bcac6c669077ec0a6b8c1ef4b33afab99fc2359"} Jan 27 13:28:39 crc kubenswrapper[4786]: I0127 13:28:39.071193 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/85eb71024025e4cc66b960472c0fb0311115e3bc192ae016fd46dabe45pgb2p" event={"ID":"3f10db36-4147-4749-8356-334a343efd90","Type":"ContainerStarted","Data":"b42526c85c29c2d60145ee57cc31bf653117aeaa7d56be811e23650f20a21475"} Jan 27 13:28:40 crc kubenswrapper[4786]: I0127 13:28:40.082326 4786 generic.go:334] "Generic (PLEG): container finished" 
podID="3f10db36-4147-4749-8356-334a343efd90" containerID="17781ac8ac785717abb0f7d80a9f93d1101c7824c902a29469e58b5d7f06b68c" exitCode=0 Jan 27 13:28:40 crc kubenswrapper[4786]: I0127 13:28:40.082377 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/85eb71024025e4cc66b960472c0fb0311115e3bc192ae016fd46dabe45pgb2p" event={"ID":"3f10db36-4147-4749-8356-334a343efd90","Type":"ContainerDied","Data":"17781ac8ac785717abb0f7d80a9f93d1101c7824c902a29469e58b5d7f06b68c"} Jan 27 13:28:41 crc kubenswrapper[4786]: I0127 13:28:41.094097 4786 generic.go:334] "Generic (PLEG): container finished" podID="3f10db36-4147-4749-8356-334a343efd90" containerID="ce8da649c7a4273e86544b89bae0daee2b0d67fb8e822e69a851c2d30443a9e6" exitCode=0 Jan 27 13:28:41 crc kubenswrapper[4786]: I0127 13:28:41.094191 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/85eb71024025e4cc66b960472c0fb0311115e3bc192ae016fd46dabe45pgb2p" event={"ID":"3f10db36-4147-4749-8356-334a343efd90","Type":"ContainerDied","Data":"ce8da649c7a4273e86544b89bae0daee2b0d67fb8e822e69a851c2d30443a9e6"} Jan 27 13:28:42 crc kubenswrapper[4786]: I0127 13:28:42.563140 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/85eb71024025e4cc66b960472c0fb0311115e3bc192ae016fd46dabe45pgb2p" Jan 27 13:28:42 crc kubenswrapper[4786]: I0127 13:28:42.634768 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgmx8\" (UniqueName: \"kubernetes.io/projected/3f10db36-4147-4749-8356-334a343efd90-kube-api-access-sgmx8\") pod \"3f10db36-4147-4749-8356-334a343efd90\" (UID: \"3f10db36-4147-4749-8356-334a343efd90\") " Jan 27 13:28:42 crc kubenswrapper[4786]: I0127 13:28:42.634857 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3f10db36-4147-4749-8356-334a343efd90-util\") pod \"3f10db36-4147-4749-8356-334a343efd90\" (UID: \"3f10db36-4147-4749-8356-334a343efd90\") " Jan 27 13:28:42 crc kubenswrapper[4786]: I0127 13:28:42.634948 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3f10db36-4147-4749-8356-334a343efd90-bundle\") pod \"3f10db36-4147-4749-8356-334a343efd90\" (UID: \"3f10db36-4147-4749-8356-334a343efd90\") " Jan 27 13:28:42 crc kubenswrapper[4786]: I0127 13:28:42.636756 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f10db36-4147-4749-8356-334a343efd90-bundle" (OuterVolumeSpecName: "bundle") pod "3f10db36-4147-4749-8356-334a343efd90" (UID: "3f10db36-4147-4749-8356-334a343efd90"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:28:42 crc kubenswrapper[4786]: I0127 13:28:42.640168 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f10db36-4147-4749-8356-334a343efd90-kube-api-access-sgmx8" (OuterVolumeSpecName: "kube-api-access-sgmx8") pod "3f10db36-4147-4749-8356-334a343efd90" (UID: "3f10db36-4147-4749-8356-334a343efd90"). InnerVolumeSpecName "kube-api-access-sgmx8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:28:42 crc kubenswrapper[4786]: I0127 13:28:42.648600 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f10db36-4147-4749-8356-334a343efd90-util" (OuterVolumeSpecName: "util") pod "3f10db36-4147-4749-8356-334a343efd90" (UID: "3f10db36-4147-4749-8356-334a343efd90"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:28:42 crc kubenswrapper[4786]: I0127 13:28:42.738408 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgmx8\" (UniqueName: \"kubernetes.io/projected/3f10db36-4147-4749-8356-334a343efd90-kube-api-access-sgmx8\") on node \"crc\" DevicePath \"\"" Jan 27 13:28:42 crc kubenswrapper[4786]: I0127 13:28:42.738459 4786 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3f10db36-4147-4749-8356-334a343efd90-util\") on node \"crc\" DevicePath \"\"" Jan 27 13:28:42 crc kubenswrapper[4786]: I0127 13:28:42.738473 4786 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3f10db36-4147-4749-8356-334a343efd90-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 13:28:43 crc kubenswrapper[4786]: I0127 13:28:43.263288 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/85eb71024025e4cc66b960472c0fb0311115e3bc192ae016fd46dabe45pgb2p" event={"ID":"3f10db36-4147-4749-8356-334a343efd90","Type":"ContainerDied","Data":"b42526c85c29c2d60145ee57cc31bf653117aeaa7d56be811e23650f20a21475"} Jan 27 13:28:43 crc kubenswrapper[4786]: I0127 13:28:43.263362 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b42526c85c29c2d60145ee57cc31bf653117aeaa7d56be811e23650f20a21475" Jan 27 13:28:43 crc kubenswrapper[4786]: I0127 13:28:43.263326 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/85eb71024025e4cc66b960472c0fb0311115e3bc192ae016fd46dabe45pgb2p" Jan 27 13:28:48 crc kubenswrapper[4786]: I0127 13:28:48.852452 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-754b45c6dd-fws6l"] Jan 27 13:28:48 crc kubenswrapper[4786]: E0127 13:28:48.853327 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f10db36-4147-4749-8356-334a343efd90" containerName="extract" Jan 27 13:28:48 crc kubenswrapper[4786]: I0127 13:28:48.853340 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f10db36-4147-4749-8356-334a343efd90" containerName="extract" Jan 27 13:28:48 crc kubenswrapper[4786]: E0127 13:28:48.853356 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f10db36-4147-4749-8356-334a343efd90" containerName="util" Jan 27 13:28:48 crc kubenswrapper[4786]: I0127 13:28:48.853362 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f10db36-4147-4749-8356-334a343efd90" containerName="util" Jan 27 13:28:48 crc kubenswrapper[4786]: E0127 13:28:48.853374 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f10db36-4147-4749-8356-334a343efd90" containerName="pull" Jan 27 13:28:48 crc kubenswrapper[4786]: I0127 13:28:48.853380 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f10db36-4147-4749-8356-334a343efd90" containerName="pull" Jan 27 13:28:48 crc kubenswrapper[4786]: I0127 13:28:48.853505 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f10db36-4147-4749-8356-334a343efd90" containerName="extract" Jan 27 13:28:48 crc kubenswrapper[4786]: I0127 13:28:48.853998 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-754b45c6dd-fws6l" Jan 27 13:28:48 crc kubenswrapper[4786]: I0127 13:28:48.856187 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-service-cert" Jan 27 13:28:48 crc kubenswrapper[4786]: I0127 13:28:48.857141 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-cg2f9" Jan 27 13:28:48 crc kubenswrapper[4786]: I0127 13:28:48.873205 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-754b45c6dd-fws6l"] Jan 27 13:28:48 crc kubenswrapper[4786]: I0127 13:28:48.927820 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e0d981b7-9481-4f08-a283-a274d47087f9-webhook-cert\") pod \"nova-operator-controller-manager-754b45c6dd-fws6l\" (UID: \"e0d981b7-9481-4f08-a283-a274d47087f9\") " pod="openstack-operators/nova-operator-controller-manager-754b45c6dd-fws6l" Jan 27 13:28:48 crc kubenswrapper[4786]: I0127 13:28:48.927998 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwnzg\" (UniqueName: \"kubernetes.io/projected/e0d981b7-9481-4f08-a283-a274d47087f9-kube-api-access-xwnzg\") pod \"nova-operator-controller-manager-754b45c6dd-fws6l\" (UID: \"e0d981b7-9481-4f08-a283-a274d47087f9\") " pod="openstack-operators/nova-operator-controller-manager-754b45c6dd-fws6l" Jan 27 13:28:48 crc kubenswrapper[4786]: I0127 13:28:48.928074 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e0d981b7-9481-4f08-a283-a274d47087f9-apiservice-cert\") pod \"nova-operator-controller-manager-754b45c6dd-fws6l\" (UID: 
\"e0d981b7-9481-4f08-a283-a274d47087f9\") " pod="openstack-operators/nova-operator-controller-manager-754b45c6dd-fws6l" Jan 27 13:28:49 crc kubenswrapper[4786]: I0127 13:28:49.029339 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e0d981b7-9481-4f08-a283-a274d47087f9-webhook-cert\") pod \"nova-operator-controller-manager-754b45c6dd-fws6l\" (UID: \"e0d981b7-9481-4f08-a283-a274d47087f9\") " pod="openstack-operators/nova-operator-controller-manager-754b45c6dd-fws6l" Jan 27 13:28:49 crc kubenswrapper[4786]: I0127 13:28:49.029761 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwnzg\" (UniqueName: \"kubernetes.io/projected/e0d981b7-9481-4f08-a283-a274d47087f9-kube-api-access-xwnzg\") pod \"nova-operator-controller-manager-754b45c6dd-fws6l\" (UID: \"e0d981b7-9481-4f08-a283-a274d47087f9\") " pod="openstack-operators/nova-operator-controller-manager-754b45c6dd-fws6l" Jan 27 13:28:49 crc kubenswrapper[4786]: I0127 13:28:49.029790 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e0d981b7-9481-4f08-a283-a274d47087f9-apiservice-cert\") pod \"nova-operator-controller-manager-754b45c6dd-fws6l\" (UID: \"e0d981b7-9481-4f08-a283-a274d47087f9\") " pod="openstack-operators/nova-operator-controller-manager-754b45c6dd-fws6l" Jan 27 13:28:49 crc kubenswrapper[4786]: I0127 13:28:49.035900 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e0d981b7-9481-4f08-a283-a274d47087f9-apiservice-cert\") pod \"nova-operator-controller-manager-754b45c6dd-fws6l\" (UID: \"e0d981b7-9481-4f08-a283-a274d47087f9\") " pod="openstack-operators/nova-operator-controller-manager-754b45c6dd-fws6l" Jan 27 13:28:49 crc kubenswrapper[4786]: I0127 13:28:49.037119 4786 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e0d981b7-9481-4f08-a283-a274d47087f9-webhook-cert\") pod \"nova-operator-controller-manager-754b45c6dd-fws6l\" (UID: \"e0d981b7-9481-4f08-a283-a274d47087f9\") " pod="openstack-operators/nova-operator-controller-manager-754b45c6dd-fws6l" Jan 27 13:28:49 crc kubenswrapper[4786]: I0127 13:28:49.049335 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwnzg\" (UniqueName: \"kubernetes.io/projected/e0d981b7-9481-4f08-a283-a274d47087f9-kube-api-access-xwnzg\") pod \"nova-operator-controller-manager-754b45c6dd-fws6l\" (UID: \"e0d981b7-9481-4f08-a283-a274d47087f9\") " pod="openstack-operators/nova-operator-controller-manager-754b45c6dd-fws6l" Jan 27 13:28:49 crc kubenswrapper[4786]: I0127 13:28:49.175785 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-754b45c6dd-fws6l" Jan 27 13:28:49 crc kubenswrapper[4786]: I0127 13:28:49.657063 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-754b45c6dd-fws6l"] Jan 27 13:28:50 crc kubenswrapper[4786]: I0127 13:28:50.322425 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-754b45c6dd-fws6l" event={"ID":"e0d981b7-9481-4f08-a283-a274d47087f9","Type":"ContainerStarted","Data":"aa465e2d7a63ba9b40bf96dbd6ffac1fa49408767ee9650651a22330d009030f"} Jan 27 13:28:50 crc kubenswrapper[4786]: I0127 13:28:50.322725 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-754b45c6dd-fws6l" event={"ID":"e0d981b7-9481-4f08-a283-a274d47087f9","Type":"ContainerStarted","Data":"ef26dba5e69b5047ec39654e3502c17e6120c07ce19dbadae7b426c5de823037"} Jan 27 13:28:50 crc kubenswrapper[4786]: I0127 13:28:50.323012 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/nova-operator-controller-manager-754b45c6dd-fws6l" Jan 27 13:28:50 crc kubenswrapper[4786]: I0127 13:28:50.347482 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-754b45c6dd-fws6l" podStartSLOduration=2.347461783 podStartE2EDuration="2.347461783s" podCreationTimestamp="2026-01-27 13:28:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:28:50.339775062 +0000 UTC m=+1313.550389181" watchObservedRunningTime="2026-01-27 13:28:50.347461783 +0000 UTC m=+1313.558075902" Jan 27 13:28:59 crc kubenswrapper[4786]: I0127 13:28:59.183726 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-754b45c6dd-fws6l" Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.451136 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-db-create-gjf4t"] Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.452503 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-gjf4t" Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.462666 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-gjf4t"] Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.498748 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d51f611-5529-49e6-abf6-abb13295f7ee-operator-scripts\") pod \"nova-api-db-create-gjf4t\" (UID: \"2d51f611-5529-49e6-abf6-abb13295f7ee\") " pod="nova-kuttl-default/nova-api-db-create-gjf4t" Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.499485 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpdnw\" (UniqueName: \"kubernetes.io/projected/2d51f611-5529-49e6-abf6-abb13295f7ee-kube-api-access-mpdnw\") pod \"nova-api-db-create-gjf4t\" (UID: \"2d51f611-5529-49e6-abf6-abb13295f7ee\") " pod="nova-kuttl-default/nova-api-db-create-gjf4t" Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.544582 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-lc8bp"] Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.545935 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-lc8bp" Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.558252 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-lc8bp"] Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.600774 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpdnw\" (UniqueName: \"kubernetes.io/projected/2d51f611-5529-49e6-abf6-abb13295f7ee-kube-api-access-mpdnw\") pod \"nova-api-db-create-gjf4t\" (UID: \"2d51f611-5529-49e6-abf6-abb13295f7ee\") " pod="nova-kuttl-default/nova-api-db-create-gjf4t" Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.600836 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmw6l\" (UniqueName: \"kubernetes.io/projected/95cacbbe-65da-4223-a8e7-79f565741d0b-kube-api-access-zmw6l\") pod \"nova-cell0-db-create-lc8bp\" (UID: \"95cacbbe-65da-4223-a8e7-79f565741d0b\") " pod="nova-kuttl-default/nova-cell0-db-create-lc8bp" Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.600869 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d51f611-5529-49e6-abf6-abb13295f7ee-operator-scripts\") pod \"nova-api-db-create-gjf4t\" (UID: \"2d51f611-5529-49e6-abf6-abb13295f7ee\") " pod="nova-kuttl-default/nova-api-db-create-gjf4t" Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.600898 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95cacbbe-65da-4223-a8e7-79f565741d0b-operator-scripts\") pod \"nova-cell0-db-create-lc8bp\" (UID: \"95cacbbe-65da-4223-a8e7-79f565741d0b\") " pod="nova-kuttl-default/nova-cell0-db-create-lc8bp" Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.601722 4786 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d51f611-5529-49e6-abf6-abb13295f7ee-operator-scripts\") pod \"nova-api-db-create-gjf4t\" (UID: \"2d51f611-5529-49e6-abf6-abb13295f7ee\") " pod="nova-kuttl-default/nova-api-db-create-gjf4t" Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.619436 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpdnw\" (UniqueName: \"kubernetes.io/projected/2d51f611-5529-49e6-abf6-abb13295f7ee-kube-api-access-mpdnw\") pod \"nova-api-db-create-gjf4t\" (UID: \"2d51f611-5529-49e6-abf6-abb13295f7ee\") " pod="nova-kuttl-default/nova-api-db-create-gjf4t" Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.685957 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-4sc9k"] Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.688237 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-4sc9k" Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.700134 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-dded-account-create-update-5nlwb"] Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.702966 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmw6l\" (UniqueName: \"kubernetes.io/projected/95cacbbe-65da-4223-a8e7-79f565741d0b-kube-api-access-zmw6l\") pod \"nova-cell0-db-create-lc8bp\" (UID: \"95cacbbe-65da-4223-a8e7-79f565741d0b\") " pod="nova-kuttl-default/nova-cell0-db-create-lc8bp" Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.703103 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95cacbbe-65da-4223-a8e7-79f565741d0b-operator-scripts\") pod \"nova-cell0-db-create-lc8bp\" (UID: \"95cacbbe-65da-4223-a8e7-79f565741d0b\") " 
pod="nova-kuttl-default/nova-cell0-db-create-lc8bp" Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.703778 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-dded-account-create-update-5nlwb" Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.704158 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95cacbbe-65da-4223-a8e7-79f565741d0b-operator-scripts\") pod \"nova-cell0-db-create-lc8bp\" (UID: \"95cacbbe-65da-4223-a8e7-79f565741d0b\") " pod="nova-kuttl-default/nova-cell0-db-create-lc8bp" Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.712264 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-api-db-secret" Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.718043 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-4sc9k"] Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.723382 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmw6l\" (UniqueName: \"kubernetes.io/projected/95cacbbe-65da-4223-a8e7-79f565741d0b-kube-api-access-zmw6l\") pod \"nova-cell0-db-create-lc8bp\" (UID: \"95cacbbe-65da-4223-a8e7-79f565741d0b\") " pod="nova-kuttl-default/nova-cell0-db-create-lc8bp" Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.737270 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-dded-account-create-update-5nlwb"] Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.771878 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-gjf4t" Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.804356 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3384e88d-d777-49d8-99ad-beef1cd493e3-operator-scripts\") pod \"nova-api-dded-account-create-update-5nlwb\" (UID: \"3384e88d-d777-49d8-99ad-beef1cd493e3\") " pod="nova-kuttl-default/nova-api-dded-account-create-update-5nlwb" Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.804708 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxj6k\" (UniqueName: \"kubernetes.io/projected/3384e88d-d777-49d8-99ad-beef1cd493e3-kube-api-access-gxj6k\") pod \"nova-api-dded-account-create-update-5nlwb\" (UID: \"3384e88d-d777-49d8-99ad-beef1cd493e3\") " pod="nova-kuttl-default/nova-api-dded-account-create-update-5nlwb" Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.804844 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjwwq\" (UniqueName: \"kubernetes.io/projected/cc56dbcf-69ac-4625-a5ee-1bc5ca4b42b6-kube-api-access-pjwwq\") pod \"nova-cell1-db-create-4sc9k\" (UID: \"cc56dbcf-69ac-4625-a5ee-1bc5ca4b42b6\") " pod="nova-kuttl-default/nova-cell1-db-create-4sc9k" Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.805081 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc56dbcf-69ac-4625-a5ee-1bc5ca4b42b6-operator-scripts\") pod \"nova-cell1-db-create-4sc9k\" (UID: \"cc56dbcf-69ac-4625-a5ee-1bc5ca4b42b6\") " pod="nova-kuttl-default/nova-cell1-db-create-4sc9k" Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.863313 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-lc8bp" Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.868887 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-0ba4-account-create-update-6dvg4"] Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.870224 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-0ba4-account-create-update-6dvg4" Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.873178 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell0-db-secret" Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.878331 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-0ba4-account-create-update-6dvg4"] Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.906726 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxj6k\" (UniqueName: \"kubernetes.io/projected/3384e88d-d777-49d8-99ad-beef1cd493e3-kube-api-access-gxj6k\") pod \"nova-api-dded-account-create-update-5nlwb\" (UID: \"3384e88d-d777-49d8-99ad-beef1cd493e3\") " pod="nova-kuttl-default/nova-api-dded-account-create-update-5nlwb" Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.907110 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjwwq\" (UniqueName: \"kubernetes.io/projected/cc56dbcf-69ac-4625-a5ee-1bc5ca4b42b6-kube-api-access-pjwwq\") pod \"nova-cell1-db-create-4sc9k\" (UID: \"cc56dbcf-69ac-4625-a5ee-1bc5ca4b42b6\") " pod="nova-kuttl-default/nova-cell1-db-create-4sc9k" Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.907219 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c026f19d-c330-4429-9886-c0ab82c46ae3-operator-scripts\") pod 
\"nova-cell0-0ba4-account-create-update-6dvg4\" (UID: \"c026f19d-c330-4429-9886-c0ab82c46ae3\") " pod="nova-kuttl-default/nova-cell0-0ba4-account-create-update-6dvg4" Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.907265 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc56dbcf-69ac-4625-a5ee-1bc5ca4b42b6-operator-scripts\") pod \"nova-cell1-db-create-4sc9k\" (UID: \"cc56dbcf-69ac-4625-a5ee-1bc5ca4b42b6\") " pod="nova-kuttl-default/nova-cell1-db-create-4sc9k" Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.907287 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvhnj\" (UniqueName: \"kubernetes.io/projected/c026f19d-c330-4429-9886-c0ab82c46ae3-kube-api-access-gvhnj\") pod \"nova-cell0-0ba4-account-create-update-6dvg4\" (UID: \"c026f19d-c330-4429-9886-c0ab82c46ae3\") " pod="nova-kuttl-default/nova-cell0-0ba4-account-create-update-6dvg4" Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.907334 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3384e88d-d777-49d8-99ad-beef1cd493e3-operator-scripts\") pod \"nova-api-dded-account-create-update-5nlwb\" (UID: \"3384e88d-d777-49d8-99ad-beef1cd493e3\") " pod="nova-kuttl-default/nova-api-dded-account-create-update-5nlwb" Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.909008 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3384e88d-d777-49d8-99ad-beef1cd493e3-operator-scripts\") pod \"nova-api-dded-account-create-update-5nlwb\" (UID: \"3384e88d-d777-49d8-99ad-beef1cd493e3\") " pod="nova-kuttl-default/nova-api-dded-account-create-update-5nlwb" Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.909060 4786 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc56dbcf-69ac-4625-a5ee-1bc5ca4b42b6-operator-scripts\") pod \"nova-cell1-db-create-4sc9k\" (UID: \"cc56dbcf-69ac-4625-a5ee-1bc5ca4b42b6\") " pod="nova-kuttl-default/nova-cell1-db-create-4sc9k" Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.935809 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjwwq\" (UniqueName: \"kubernetes.io/projected/cc56dbcf-69ac-4625-a5ee-1bc5ca4b42b6-kube-api-access-pjwwq\") pod \"nova-cell1-db-create-4sc9k\" (UID: \"cc56dbcf-69ac-4625-a5ee-1bc5ca4b42b6\") " pod="nova-kuttl-default/nova-cell1-db-create-4sc9k" Jan 27 13:29:26 crc kubenswrapper[4786]: I0127 13:29:26.942184 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxj6k\" (UniqueName: \"kubernetes.io/projected/3384e88d-d777-49d8-99ad-beef1cd493e3-kube-api-access-gxj6k\") pod \"nova-api-dded-account-create-update-5nlwb\" (UID: \"3384e88d-d777-49d8-99ad-beef1cd493e3\") " pod="nova-kuttl-default/nova-api-dded-account-create-update-5nlwb" Jan 27 13:29:27 crc kubenswrapper[4786]: I0127 13:29:27.012050 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c026f19d-c330-4429-9886-c0ab82c46ae3-operator-scripts\") pod \"nova-cell0-0ba4-account-create-update-6dvg4\" (UID: \"c026f19d-c330-4429-9886-c0ab82c46ae3\") " pod="nova-kuttl-default/nova-cell0-0ba4-account-create-update-6dvg4" Jan 27 13:29:27 crc kubenswrapper[4786]: I0127 13:29:27.012114 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvhnj\" (UniqueName: \"kubernetes.io/projected/c026f19d-c330-4429-9886-c0ab82c46ae3-kube-api-access-gvhnj\") pod \"nova-cell0-0ba4-account-create-update-6dvg4\" (UID: \"c026f19d-c330-4429-9886-c0ab82c46ae3\") " pod="nova-kuttl-default/nova-cell0-0ba4-account-create-update-6dvg4" Jan 27 13:29:27 crc 
kubenswrapper[4786]: I0127 13:29:27.013250 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-4sc9k" Jan 27 13:29:27 crc kubenswrapper[4786]: I0127 13:29:27.013362 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c026f19d-c330-4429-9886-c0ab82c46ae3-operator-scripts\") pod \"nova-cell0-0ba4-account-create-update-6dvg4\" (UID: \"c026f19d-c330-4429-9886-c0ab82c46ae3\") " pod="nova-kuttl-default/nova-cell0-0ba4-account-create-update-6dvg4" Jan 27 13:29:27 crc kubenswrapper[4786]: I0127 13:29:27.027636 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvhnj\" (UniqueName: \"kubernetes.io/projected/c026f19d-c330-4429-9886-c0ab82c46ae3-kube-api-access-gvhnj\") pod \"nova-cell0-0ba4-account-create-update-6dvg4\" (UID: \"c026f19d-c330-4429-9886-c0ab82c46ae3\") " pod="nova-kuttl-default/nova-cell0-0ba4-account-create-update-6dvg4" Jan 27 13:29:27 crc kubenswrapper[4786]: I0127 13:29:27.064897 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-55b0-account-create-update-5rhgz"] Jan 27 13:29:27 crc kubenswrapper[4786]: I0127 13:29:27.066521 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-dded-account-create-update-5nlwb" Jan 27 13:29:27 crc kubenswrapper[4786]: I0127 13:29:27.067188 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-55b0-account-create-update-5rhgz" Jan 27 13:29:27 crc kubenswrapper[4786]: I0127 13:29:27.072149 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell1-db-secret" Jan 27 13:29:27 crc kubenswrapper[4786]: I0127 13:29:27.113535 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-55b0-account-create-update-5rhgz"] Jan 27 13:29:27 crc kubenswrapper[4786]: I0127 13:29:27.114558 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/768f76f5-e1e2-4d1e-b9a0-bd6884fd9d8f-operator-scripts\") pod \"nova-cell1-55b0-account-create-update-5rhgz\" (UID: \"768f76f5-e1e2-4d1e-b9a0-bd6884fd9d8f\") " pod="nova-kuttl-default/nova-cell1-55b0-account-create-update-5rhgz" Jan 27 13:29:27 crc kubenswrapper[4786]: I0127 13:29:27.114651 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnl6q\" (UniqueName: \"kubernetes.io/projected/768f76f5-e1e2-4d1e-b9a0-bd6884fd9d8f-kube-api-access-qnl6q\") pod \"nova-cell1-55b0-account-create-update-5rhgz\" (UID: \"768f76f5-e1e2-4d1e-b9a0-bd6884fd9d8f\") " pod="nova-kuttl-default/nova-cell1-55b0-account-create-update-5rhgz" Jan 27 13:29:27 crc kubenswrapper[4786]: I0127 13:29:27.165707 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-gjf4t"] Jan 27 13:29:27 crc kubenswrapper[4786]: W0127 13:29:27.173152 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d51f611_5529_49e6_abf6_abb13295f7ee.slice/crio-3c901cdff66b97070e83c3d415d3406a45a9da157d91c837a25c6ea6fcee6308 WatchSource:0}: Error finding container 3c901cdff66b97070e83c3d415d3406a45a9da157d91c837a25c6ea6fcee6308: Status 404 returned error can't find the 
container with id 3c901cdff66b97070e83c3d415d3406a45a9da157d91c837a25c6ea6fcee6308 Jan 27 13:29:27 crc kubenswrapper[4786]: I0127 13:29:27.195048 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-0ba4-account-create-update-6dvg4" Jan 27 13:29:27 crc kubenswrapper[4786]: I0127 13:29:27.216667 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/768f76f5-e1e2-4d1e-b9a0-bd6884fd9d8f-operator-scripts\") pod \"nova-cell1-55b0-account-create-update-5rhgz\" (UID: \"768f76f5-e1e2-4d1e-b9a0-bd6884fd9d8f\") " pod="nova-kuttl-default/nova-cell1-55b0-account-create-update-5rhgz" Jan 27 13:29:27 crc kubenswrapper[4786]: I0127 13:29:27.216733 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnl6q\" (UniqueName: \"kubernetes.io/projected/768f76f5-e1e2-4d1e-b9a0-bd6884fd9d8f-kube-api-access-qnl6q\") pod \"nova-cell1-55b0-account-create-update-5rhgz\" (UID: \"768f76f5-e1e2-4d1e-b9a0-bd6884fd9d8f\") " pod="nova-kuttl-default/nova-cell1-55b0-account-create-update-5rhgz" Jan 27 13:29:27 crc kubenswrapper[4786]: I0127 13:29:27.218315 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/768f76f5-e1e2-4d1e-b9a0-bd6884fd9d8f-operator-scripts\") pod \"nova-cell1-55b0-account-create-update-5rhgz\" (UID: \"768f76f5-e1e2-4d1e-b9a0-bd6884fd9d8f\") " pod="nova-kuttl-default/nova-cell1-55b0-account-create-update-5rhgz" Jan 27 13:29:27 crc kubenswrapper[4786]: I0127 13:29:27.237722 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnl6q\" (UniqueName: \"kubernetes.io/projected/768f76f5-e1e2-4d1e-b9a0-bd6884fd9d8f-kube-api-access-qnl6q\") pod \"nova-cell1-55b0-account-create-update-5rhgz\" (UID: \"768f76f5-e1e2-4d1e-b9a0-bd6884fd9d8f\") " 
pod="nova-kuttl-default/nova-cell1-55b0-account-create-update-5rhgz" Jan 27 13:29:27 crc kubenswrapper[4786]: I0127 13:29:27.438233 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-lc8bp"] Jan 27 13:29:27 crc kubenswrapper[4786]: W0127 13:29:27.438617 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95cacbbe_65da_4223_a8e7_79f565741d0b.slice/crio-f335d87d3dd7511c72c6bf56a9c0babfb540ce4c5df9f6d0461e60c37b864dc6 WatchSource:0}: Error finding container f335d87d3dd7511c72c6bf56a9c0babfb540ce4c5df9f6d0461e60c37b864dc6: Status 404 returned error can't find the container with id f335d87d3dd7511c72c6bf56a9c0babfb540ce4c5df9f6d0461e60c37b864dc6 Jan 27 13:29:27 crc kubenswrapper[4786]: I0127 13:29:27.439766 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-55b0-account-create-update-5rhgz" Jan 27 13:29:27 crc kubenswrapper[4786]: I0127 13:29:27.568448 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-4sc9k"] Jan 27 13:29:27 crc kubenswrapper[4786]: I0127 13:29:27.597973 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-lc8bp" event={"ID":"95cacbbe-65da-4223-a8e7-79f565741d0b","Type":"ContainerStarted","Data":"f335d87d3dd7511c72c6bf56a9c0babfb540ce4c5df9f6d0461e60c37b864dc6"} Jan 27 13:29:27 crc kubenswrapper[4786]: I0127 13:29:27.599280 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-4sc9k" event={"ID":"cc56dbcf-69ac-4625-a5ee-1bc5ca4b42b6","Type":"ContainerStarted","Data":"936e57ec1c6683a8906156d7d86d91fa97f412445ae45150cbc337ee6b31efe6"} Jan 27 13:29:27 crc kubenswrapper[4786]: I0127 13:29:27.601809 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-gjf4t" 
event={"ID":"2d51f611-5529-49e6-abf6-abb13295f7ee","Type":"ContainerStarted","Data":"cd6edd288c29486bdc899a5dc79ca8e7875d10a8c6557a16890ad0956988dc46"} Jan 27 13:29:27 crc kubenswrapper[4786]: I0127 13:29:27.601856 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-gjf4t" event={"ID":"2d51f611-5529-49e6-abf6-abb13295f7ee","Type":"ContainerStarted","Data":"3c901cdff66b97070e83c3d415d3406a45a9da157d91c837a25c6ea6fcee6308"} Jan 27 13:29:27 crc kubenswrapper[4786]: I0127 13:29:27.654840 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-api-db-create-gjf4t" podStartSLOduration=1.654820511 podStartE2EDuration="1.654820511s" podCreationTimestamp="2026-01-27 13:29:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:29:27.623740587 +0000 UTC m=+1350.834354706" watchObservedRunningTime="2026-01-27 13:29:27.654820511 +0000 UTC m=+1350.865434630" Jan 27 13:29:27 crc kubenswrapper[4786]: I0127 13:29:27.663678 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-dded-account-create-update-5nlwb"] Jan 27 13:29:27 crc kubenswrapper[4786]: I0127 13:29:27.753153 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-0ba4-account-create-update-6dvg4"] Jan 27 13:29:27 crc kubenswrapper[4786]: W0127 13:29:27.756776 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc026f19d_c330_4429_9886_c0ab82c46ae3.slice/crio-ff64c77929704596543f8b8e317cb7e2f1405819d5b0f803243389c0929fd7e1 WatchSource:0}: Error finding container ff64c77929704596543f8b8e317cb7e2f1405819d5b0f803243389c0929fd7e1: Status 404 returned error can't find the container with id ff64c77929704596543f8b8e317cb7e2f1405819d5b0f803243389c0929fd7e1 Jan 27 13:29:27 crc 
kubenswrapper[4786]: I0127 13:29:27.899257 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-55b0-account-create-update-5rhgz"] Jan 27 13:29:27 crc kubenswrapper[4786]: W0127 13:29:27.909006 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod768f76f5_e1e2_4d1e_b9a0_bd6884fd9d8f.slice/crio-e9e6f08591c563138167160552cac3236349c57c18ea27d7bec8eb2646a21871 WatchSource:0}: Error finding container e9e6f08591c563138167160552cac3236349c57c18ea27d7bec8eb2646a21871: Status 404 returned error can't find the container with id e9e6f08591c563138167160552cac3236349c57c18ea27d7bec8eb2646a21871 Jan 27 13:29:28 crc kubenswrapper[4786]: I0127 13:29:28.609828 4786 generic.go:334] "Generic (PLEG): container finished" podID="95cacbbe-65da-4223-a8e7-79f565741d0b" containerID="0a67791a9ef2bfaafbd8aede20528025945211b4524995c8c16c92a89c7bd83a" exitCode=0 Jan 27 13:29:28 crc kubenswrapper[4786]: I0127 13:29:28.609939 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-lc8bp" event={"ID":"95cacbbe-65da-4223-a8e7-79f565741d0b","Type":"ContainerDied","Data":"0a67791a9ef2bfaafbd8aede20528025945211b4524995c8c16c92a89c7bd83a"} Jan 27 13:29:28 crc kubenswrapper[4786]: I0127 13:29:28.612182 4786 generic.go:334] "Generic (PLEG): container finished" podID="cc56dbcf-69ac-4625-a5ee-1bc5ca4b42b6" containerID="c9c4e00ba48fc676326785afed4dac85596d012dbdebfd5f8cc0fe3c13969aff" exitCode=0 Jan 27 13:29:28 crc kubenswrapper[4786]: I0127 13:29:28.612217 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-4sc9k" event={"ID":"cc56dbcf-69ac-4625-a5ee-1bc5ca4b42b6","Type":"ContainerDied","Data":"c9c4e00ba48fc676326785afed4dac85596d012dbdebfd5f8cc0fe3c13969aff"} Jan 27 13:29:28 crc kubenswrapper[4786]: I0127 13:29:28.613812 4786 generic.go:334] "Generic (PLEG): container finished" 
podID="2d51f611-5529-49e6-abf6-abb13295f7ee" containerID="cd6edd288c29486bdc899a5dc79ca8e7875d10a8c6557a16890ad0956988dc46" exitCode=0 Jan 27 13:29:28 crc kubenswrapper[4786]: I0127 13:29:28.613867 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-gjf4t" event={"ID":"2d51f611-5529-49e6-abf6-abb13295f7ee","Type":"ContainerDied","Data":"cd6edd288c29486bdc899a5dc79ca8e7875d10a8c6557a16890ad0956988dc46"} Jan 27 13:29:28 crc kubenswrapper[4786]: I0127 13:29:28.615142 4786 generic.go:334] "Generic (PLEG): container finished" podID="3384e88d-d777-49d8-99ad-beef1cd493e3" containerID="cc2ad4110ffb34287f172b11672e3cb0955c32f8af0f56fc4618864c690e918f" exitCode=0 Jan 27 13:29:28 crc kubenswrapper[4786]: I0127 13:29:28.615180 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-dded-account-create-update-5nlwb" event={"ID":"3384e88d-d777-49d8-99ad-beef1cd493e3","Type":"ContainerDied","Data":"cc2ad4110ffb34287f172b11672e3cb0955c32f8af0f56fc4618864c690e918f"} Jan 27 13:29:28 crc kubenswrapper[4786]: I0127 13:29:28.615380 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-dded-account-create-update-5nlwb" event={"ID":"3384e88d-d777-49d8-99ad-beef1cd493e3","Type":"ContainerStarted","Data":"691640e41510f534533fc2a2a3f9ddc093e76d0eaf23bf02f732652094336111"} Jan 27 13:29:28 crc kubenswrapper[4786]: I0127 13:29:28.616792 4786 generic.go:334] "Generic (PLEG): container finished" podID="768f76f5-e1e2-4d1e-b9a0-bd6884fd9d8f" containerID="28fff4ed986210774c27b56df470b552b21a7df5c245dde34411edc671a76fc0" exitCode=0 Jan 27 13:29:28 crc kubenswrapper[4786]: I0127 13:29:28.616877 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-55b0-account-create-update-5rhgz" event={"ID":"768f76f5-e1e2-4d1e-b9a0-bd6884fd9d8f","Type":"ContainerDied","Data":"28fff4ed986210774c27b56df470b552b21a7df5c245dde34411edc671a76fc0"} Jan 27 13:29:28 crc 
kubenswrapper[4786]: I0127 13:29:28.616902 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-55b0-account-create-update-5rhgz" event={"ID":"768f76f5-e1e2-4d1e-b9a0-bd6884fd9d8f","Type":"ContainerStarted","Data":"e9e6f08591c563138167160552cac3236349c57c18ea27d7bec8eb2646a21871"} Jan 27 13:29:28 crc kubenswrapper[4786]: I0127 13:29:28.618446 4786 generic.go:334] "Generic (PLEG): container finished" podID="c026f19d-c330-4429-9886-c0ab82c46ae3" containerID="f371b29f3df7547a3108a17c8c909bfcd52de5dd36a6e128b39aec6ccff71222" exitCode=0 Jan 27 13:29:28 crc kubenswrapper[4786]: I0127 13:29:28.618476 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-0ba4-account-create-update-6dvg4" event={"ID":"c026f19d-c330-4429-9886-c0ab82c46ae3","Type":"ContainerDied","Data":"f371b29f3df7547a3108a17c8c909bfcd52de5dd36a6e128b39aec6ccff71222"} Jan 27 13:29:28 crc kubenswrapper[4786]: I0127 13:29:28.618501 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-0ba4-account-create-update-6dvg4" event={"ID":"c026f19d-c330-4429-9886-c0ab82c46ae3","Type":"ContainerStarted","Data":"ff64c77929704596543f8b8e317cb7e2f1405819d5b0f803243389c0929fd7e1"} Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.040415 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-dded-account-create-update-5nlwb" Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.167374 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxj6k\" (UniqueName: \"kubernetes.io/projected/3384e88d-d777-49d8-99ad-beef1cd493e3-kube-api-access-gxj6k\") pod \"3384e88d-d777-49d8-99ad-beef1cd493e3\" (UID: \"3384e88d-d777-49d8-99ad-beef1cd493e3\") " Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.167494 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3384e88d-d777-49d8-99ad-beef1cd493e3-operator-scripts\") pod \"3384e88d-d777-49d8-99ad-beef1cd493e3\" (UID: \"3384e88d-d777-49d8-99ad-beef1cd493e3\") " Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.168212 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3384e88d-d777-49d8-99ad-beef1cd493e3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3384e88d-d777-49d8-99ad-beef1cd493e3" (UID: "3384e88d-d777-49d8-99ad-beef1cd493e3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.177133 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3384e88d-d777-49d8-99ad-beef1cd493e3-kube-api-access-gxj6k" (OuterVolumeSpecName: "kube-api-access-gxj6k") pod "3384e88d-d777-49d8-99ad-beef1cd493e3" (UID: "3384e88d-d777-49d8-99ad-beef1cd493e3"). InnerVolumeSpecName "kube-api-access-gxj6k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.270240 4786 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3384e88d-d777-49d8-99ad-beef1cd493e3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.270280 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxj6k\" (UniqueName: \"kubernetes.io/projected/3384e88d-d777-49d8-99ad-beef1cd493e3-kube-api-access-gxj6k\") on node \"crc\" DevicePath \"\"" Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.424042 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-0ba4-account-create-update-6dvg4" Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.428939 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-55b0-account-create-update-5rhgz" Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.439216 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-gjf4t" Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.506618 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-lc8bp" Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.507403 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-4sc9k" Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.590489 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c026f19d-c330-4429-9886-c0ab82c46ae3-operator-scripts\") pod \"c026f19d-c330-4429-9886-c0ab82c46ae3\" (UID: \"c026f19d-c330-4429-9886-c0ab82c46ae3\") " Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.590651 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d51f611-5529-49e6-abf6-abb13295f7ee-operator-scripts\") pod \"2d51f611-5529-49e6-abf6-abb13295f7ee\" (UID: \"2d51f611-5529-49e6-abf6-abb13295f7ee\") " Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.590698 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnl6q\" (UniqueName: \"kubernetes.io/projected/768f76f5-e1e2-4d1e-b9a0-bd6884fd9d8f-kube-api-access-qnl6q\") pod \"768f76f5-e1e2-4d1e-b9a0-bd6884fd9d8f\" (UID: \"768f76f5-e1e2-4d1e-b9a0-bd6884fd9d8f\") " Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.590771 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvhnj\" (UniqueName: \"kubernetes.io/projected/c026f19d-c330-4429-9886-c0ab82c46ae3-kube-api-access-gvhnj\") pod \"c026f19d-c330-4429-9886-c0ab82c46ae3\" (UID: \"c026f19d-c330-4429-9886-c0ab82c46ae3\") " Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.590819 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpdnw\" (UniqueName: \"kubernetes.io/projected/2d51f611-5529-49e6-abf6-abb13295f7ee-kube-api-access-mpdnw\") pod \"2d51f611-5529-49e6-abf6-abb13295f7ee\" (UID: \"2d51f611-5529-49e6-abf6-abb13295f7ee\") " Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.590874 4786 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/768f76f5-e1e2-4d1e-b9a0-bd6884fd9d8f-operator-scripts\") pod \"768f76f5-e1e2-4d1e-b9a0-bd6884fd9d8f\" (UID: \"768f76f5-e1e2-4d1e-b9a0-bd6884fd9d8f\") " Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.591388 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d51f611-5529-49e6-abf6-abb13295f7ee-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2d51f611-5529-49e6-abf6-abb13295f7ee" (UID: "2d51f611-5529-49e6-abf6-abb13295f7ee"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.591726 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/768f76f5-e1e2-4d1e-b9a0-bd6884fd9d8f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "768f76f5-e1e2-4d1e-b9a0-bd6884fd9d8f" (UID: "768f76f5-e1e2-4d1e-b9a0-bd6884fd9d8f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.592260 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c026f19d-c330-4429-9886-c0ab82c46ae3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c026f19d-c330-4429-9886-c0ab82c46ae3" (UID: "c026f19d-c330-4429-9886-c0ab82c46ae3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.596200 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/768f76f5-e1e2-4d1e-b9a0-bd6884fd9d8f-kube-api-access-qnl6q" (OuterVolumeSpecName: "kube-api-access-qnl6q") pod "768f76f5-e1e2-4d1e-b9a0-bd6884fd9d8f" (UID: "768f76f5-e1e2-4d1e-b9a0-bd6884fd9d8f"). 
InnerVolumeSpecName "kube-api-access-qnl6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.596241 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c026f19d-c330-4429-9886-c0ab82c46ae3-kube-api-access-gvhnj" (OuterVolumeSpecName: "kube-api-access-gvhnj") pod "c026f19d-c330-4429-9886-c0ab82c46ae3" (UID: "c026f19d-c330-4429-9886-c0ab82c46ae3"). InnerVolumeSpecName "kube-api-access-gvhnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.608457 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d51f611-5529-49e6-abf6-abb13295f7ee-kube-api-access-mpdnw" (OuterVolumeSpecName: "kube-api-access-mpdnw") pod "2d51f611-5529-49e6-abf6-abb13295f7ee" (UID: "2d51f611-5529-49e6-abf6-abb13295f7ee"). InnerVolumeSpecName "kube-api-access-mpdnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.634915 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-dded-account-create-update-5nlwb" event={"ID":"3384e88d-d777-49d8-99ad-beef1cd493e3","Type":"ContainerDied","Data":"691640e41510f534533fc2a2a3f9ddc093e76d0eaf23bf02f732652094336111"} Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.634954 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="691640e41510f534533fc2a2a3f9ddc093e76d0eaf23bf02f732652094336111" Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.635008 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-dded-account-create-update-5nlwb" Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.638469 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-55b0-account-create-update-5rhgz" event={"ID":"768f76f5-e1e2-4d1e-b9a0-bd6884fd9d8f","Type":"ContainerDied","Data":"e9e6f08591c563138167160552cac3236349c57c18ea27d7bec8eb2646a21871"} Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.638519 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9e6f08591c563138167160552cac3236349c57c18ea27d7bec8eb2646a21871" Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.638479 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-55b0-account-create-update-5rhgz" Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.639581 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-0ba4-account-create-update-6dvg4" event={"ID":"c026f19d-c330-4429-9886-c0ab82c46ae3","Type":"ContainerDied","Data":"ff64c77929704596543f8b8e317cb7e2f1405819d5b0f803243389c0929fd7e1"} Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.639668 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff64c77929704596543f8b8e317cb7e2f1405819d5b0f803243389c0929fd7e1" Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.639712 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-0ba4-account-create-update-6dvg4" Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.642325 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-lc8bp" event={"ID":"95cacbbe-65da-4223-a8e7-79f565741d0b","Type":"ContainerDied","Data":"f335d87d3dd7511c72c6bf56a9c0babfb540ce4c5df9f6d0461e60c37b864dc6"} Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.642352 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f335d87d3dd7511c72c6bf56a9c0babfb540ce4c5df9f6d0461e60c37b864dc6" Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.642389 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-lc8bp" Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.645547 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-gjf4t" event={"ID":"2d51f611-5529-49e6-abf6-abb13295f7ee","Type":"ContainerDied","Data":"3c901cdff66b97070e83c3d415d3406a45a9da157d91c837a25c6ea6fcee6308"} Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.645573 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c901cdff66b97070e83c3d415d3406a45a9da157d91c837a25c6ea6fcee6308" Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.645555 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-gjf4t" Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.646731 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-4sc9k" event={"ID":"cc56dbcf-69ac-4625-a5ee-1bc5ca4b42b6","Type":"ContainerDied","Data":"936e57ec1c6683a8906156d7d86d91fa97f412445ae45150cbc337ee6b31efe6"} Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.646752 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="936e57ec1c6683a8906156d7d86d91fa97f412445ae45150cbc337ee6b31efe6" Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.646794 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-4sc9k" Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.692033 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjwwq\" (UniqueName: \"kubernetes.io/projected/cc56dbcf-69ac-4625-a5ee-1bc5ca4b42b6-kube-api-access-pjwwq\") pod \"cc56dbcf-69ac-4625-a5ee-1bc5ca4b42b6\" (UID: \"cc56dbcf-69ac-4625-a5ee-1bc5ca4b42b6\") " Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.692088 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmw6l\" (UniqueName: \"kubernetes.io/projected/95cacbbe-65da-4223-a8e7-79f565741d0b-kube-api-access-zmw6l\") pod \"95cacbbe-65da-4223-a8e7-79f565741d0b\" (UID: \"95cacbbe-65da-4223-a8e7-79f565741d0b\") " Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.692127 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95cacbbe-65da-4223-a8e7-79f565741d0b-operator-scripts\") pod \"95cacbbe-65da-4223-a8e7-79f565741d0b\" (UID: \"95cacbbe-65da-4223-a8e7-79f565741d0b\") " Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.692182 4786 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc56dbcf-69ac-4625-a5ee-1bc5ca4b42b6-operator-scripts\") pod \"cc56dbcf-69ac-4625-a5ee-1bc5ca4b42b6\" (UID: \"cc56dbcf-69ac-4625-a5ee-1bc5ca4b42b6\") " Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.692493 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvhnj\" (UniqueName: \"kubernetes.io/projected/c026f19d-c330-4429-9886-c0ab82c46ae3-kube-api-access-gvhnj\") on node \"crc\" DevicePath \"\"" Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.692512 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpdnw\" (UniqueName: \"kubernetes.io/projected/2d51f611-5529-49e6-abf6-abb13295f7ee-kube-api-access-mpdnw\") on node \"crc\" DevicePath \"\"" Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.692524 4786 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/768f76f5-e1e2-4d1e-b9a0-bd6884fd9d8f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.692535 4786 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c026f19d-c330-4429-9886-c0ab82c46ae3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.692545 4786 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d51f611-5529-49e6-abf6-abb13295f7ee-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.692555 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnl6q\" (UniqueName: \"kubernetes.io/projected/768f76f5-e1e2-4d1e-b9a0-bd6884fd9d8f-kube-api-access-qnl6q\") on node \"crc\" DevicePath \"\"" Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.692909 4786 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95cacbbe-65da-4223-a8e7-79f565741d0b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "95cacbbe-65da-4223-a8e7-79f565741d0b" (UID: "95cacbbe-65da-4223-a8e7-79f565741d0b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.693097 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc56dbcf-69ac-4625-a5ee-1bc5ca4b42b6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cc56dbcf-69ac-4625-a5ee-1bc5ca4b42b6" (UID: "cc56dbcf-69ac-4625-a5ee-1bc5ca4b42b6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.694780 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc56dbcf-69ac-4625-a5ee-1bc5ca4b42b6-kube-api-access-pjwwq" (OuterVolumeSpecName: "kube-api-access-pjwwq") pod "cc56dbcf-69ac-4625-a5ee-1bc5ca4b42b6" (UID: "cc56dbcf-69ac-4625-a5ee-1bc5ca4b42b6"). InnerVolumeSpecName "kube-api-access-pjwwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.695059 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95cacbbe-65da-4223-a8e7-79f565741d0b-kube-api-access-zmw6l" (OuterVolumeSpecName: "kube-api-access-zmw6l") pod "95cacbbe-65da-4223-a8e7-79f565741d0b" (UID: "95cacbbe-65da-4223-a8e7-79f565741d0b"). InnerVolumeSpecName "kube-api-access-zmw6l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.793456 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjwwq\" (UniqueName: \"kubernetes.io/projected/cc56dbcf-69ac-4625-a5ee-1bc5ca4b42b6-kube-api-access-pjwwq\") on node \"crc\" DevicePath \"\"" Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.793504 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmw6l\" (UniqueName: \"kubernetes.io/projected/95cacbbe-65da-4223-a8e7-79f565741d0b-kube-api-access-zmw6l\") on node \"crc\" DevicePath \"\"" Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.793519 4786 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95cacbbe-65da-4223-a8e7-79f565741d0b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:29:30 crc kubenswrapper[4786]: I0127 13:29:30.793532 4786 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc56dbcf-69ac-4625-a5ee-1bc5ca4b42b6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:29:32 crc kubenswrapper[4786]: I0127 13:29:32.175077 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-fwsr4"] Jan 27 13:29:32 crc kubenswrapper[4786]: E0127 13:29:32.175433 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3384e88d-d777-49d8-99ad-beef1cd493e3" containerName="mariadb-account-create-update" Jan 27 13:29:32 crc kubenswrapper[4786]: I0127 13:29:32.175447 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="3384e88d-d777-49d8-99ad-beef1cd493e3" containerName="mariadb-account-create-update" Jan 27 13:29:32 crc kubenswrapper[4786]: E0127 13:29:32.175476 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d51f611-5529-49e6-abf6-abb13295f7ee" containerName="mariadb-database-create" Jan 27 13:29:32 
crc kubenswrapper[4786]: I0127 13:29:32.175482 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d51f611-5529-49e6-abf6-abb13295f7ee" containerName="mariadb-database-create" Jan 27 13:29:32 crc kubenswrapper[4786]: E0127 13:29:32.175511 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c026f19d-c330-4429-9886-c0ab82c46ae3" containerName="mariadb-account-create-update" Jan 27 13:29:32 crc kubenswrapper[4786]: I0127 13:29:32.175518 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="c026f19d-c330-4429-9886-c0ab82c46ae3" containerName="mariadb-account-create-update" Jan 27 13:29:32 crc kubenswrapper[4786]: E0127 13:29:32.175529 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc56dbcf-69ac-4625-a5ee-1bc5ca4b42b6" containerName="mariadb-database-create" Jan 27 13:29:32 crc kubenswrapper[4786]: I0127 13:29:32.175536 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc56dbcf-69ac-4625-a5ee-1bc5ca4b42b6" containerName="mariadb-database-create" Jan 27 13:29:32 crc kubenswrapper[4786]: E0127 13:29:32.175554 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="768f76f5-e1e2-4d1e-b9a0-bd6884fd9d8f" containerName="mariadb-account-create-update" Jan 27 13:29:32 crc kubenswrapper[4786]: I0127 13:29:32.175559 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="768f76f5-e1e2-4d1e-b9a0-bd6884fd9d8f" containerName="mariadb-account-create-update" Jan 27 13:29:32 crc kubenswrapper[4786]: E0127 13:29:32.175575 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95cacbbe-65da-4223-a8e7-79f565741d0b" containerName="mariadb-database-create" Jan 27 13:29:32 crc kubenswrapper[4786]: I0127 13:29:32.175582 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="95cacbbe-65da-4223-a8e7-79f565741d0b" containerName="mariadb-database-create" Jan 27 13:29:32 crc kubenswrapper[4786]: I0127 13:29:32.175754 4786 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2d51f611-5529-49e6-abf6-abb13295f7ee" containerName="mariadb-database-create" Jan 27 13:29:32 crc kubenswrapper[4786]: I0127 13:29:32.175772 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc56dbcf-69ac-4625-a5ee-1bc5ca4b42b6" containerName="mariadb-database-create" Jan 27 13:29:32 crc kubenswrapper[4786]: I0127 13:29:32.175783 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="95cacbbe-65da-4223-a8e7-79f565741d0b" containerName="mariadb-database-create" Jan 27 13:29:32 crc kubenswrapper[4786]: I0127 13:29:32.175796 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="3384e88d-d777-49d8-99ad-beef1cd493e3" containerName="mariadb-account-create-update" Jan 27 13:29:32 crc kubenswrapper[4786]: I0127 13:29:32.175805 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="c026f19d-c330-4429-9886-c0ab82c46ae3" containerName="mariadb-account-create-update" Jan 27 13:29:32 crc kubenswrapper[4786]: I0127 13:29:32.175821 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="768f76f5-e1e2-4d1e-b9a0-bd6884fd9d8f" containerName="mariadb-account-create-update" Jan 27 13:29:32 crc kubenswrapper[4786]: I0127 13:29:32.176384 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-fwsr4" Jan 27 13:29:32 crc kubenswrapper[4786]: I0127 13:29:32.178223 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-scripts" Jan 27 13:29:32 crc kubenswrapper[4786]: I0127 13:29:32.178441 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 27 13:29:32 crc kubenswrapper[4786]: I0127 13:29:32.181522 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-nova-kuttl-dockercfg-gxc5h" Jan 27 13:29:32 crc kubenswrapper[4786]: I0127 13:29:32.198831 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-fwsr4"] Jan 27 13:29:32 crc kubenswrapper[4786]: I0127 13:29:32.321179 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08dce0df-922f-4538-abaf-ec64509f246f-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-fwsr4\" (UID: \"08dce0df-922f-4538-abaf-ec64509f246f\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-fwsr4" Jan 27 13:29:32 crc kubenswrapper[4786]: I0127 13:29:32.321557 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cptc4\" (UniqueName: \"kubernetes.io/projected/08dce0df-922f-4538-abaf-ec64509f246f-kube-api-access-cptc4\") pod \"nova-kuttl-cell0-conductor-db-sync-fwsr4\" (UID: \"08dce0df-922f-4538-abaf-ec64509f246f\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-fwsr4" Jan 27 13:29:32 crc kubenswrapper[4786]: I0127 13:29:32.321660 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08dce0df-922f-4538-abaf-ec64509f246f-scripts\") pod 
\"nova-kuttl-cell0-conductor-db-sync-fwsr4\" (UID: \"08dce0df-922f-4538-abaf-ec64509f246f\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-fwsr4" Jan 27 13:29:32 crc kubenswrapper[4786]: I0127 13:29:32.423293 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08dce0df-922f-4538-abaf-ec64509f246f-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-fwsr4\" (UID: \"08dce0df-922f-4538-abaf-ec64509f246f\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-fwsr4" Jan 27 13:29:32 crc kubenswrapper[4786]: I0127 13:29:32.423363 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08dce0df-922f-4538-abaf-ec64509f246f-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-fwsr4\" (UID: \"08dce0df-922f-4538-abaf-ec64509f246f\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-fwsr4" Jan 27 13:29:32 crc kubenswrapper[4786]: I0127 13:29:32.423411 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cptc4\" (UniqueName: \"kubernetes.io/projected/08dce0df-922f-4538-abaf-ec64509f246f-kube-api-access-cptc4\") pod \"nova-kuttl-cell0-conductor-db-sync-fwsr4\" (UID: \"08dce0df-922f-4538-abaf-ec64509f246f\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-fwsr4" Jan 27 13:29:32 crc kubenswrapper[4786]: I0127 13:29:32.429016 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08dce0df-922f-4538-abaf-ec64509f246f-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-fwsr4\" (UID: \"08dce0df-922f-4538-abaf-ec64509f246f\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-fwsr4" Jan 27 13:29:32 crc kubenswrapper[4786]: I0127 13:29:32.435836 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/08dce0df-922f-4538-abaf-ec64509f246f-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-fwsr4\" (UID: \"08dce0df-922f-4538-abaf-ec64509f246f\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-fwsr4" Jan 27 13:29:32 crc kubenswrapper[4786]: I0127 13:29:32.438294 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cptc4\" (UniqueName: \"kubernetes.io/projected/08dce0df-922f-4538-abaf-ec64509f246f-kube-api-access-cptc4\") pod \"nova-kuttl-cell0-conductor-db-sync-fwsr4\" (UID: \"08dce0df-922f-4538-abaf-ec64509f246f\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-fwsr4" Jan 27 13:29:32 crc kubenswrapper[4786]: I0127 13:29:32.496700 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-fwsr4" Jan 27 13:29:32 crc kubenswrapper[4786]: I0127 13:29:32.902533 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-fwsr4"] Jan 27 13:29:32 crc kubenswrapper[4786]: W0127 13:29:32.906340 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08dce0df_922f_4538_abaf_ec64509f246f.slice/crio-d5bb5fcf103a288fa4e0c464a2b4d1ce393254f0b1ff3e657ac7a087158dd56e WatchSource:0}: Error finding container d5bb5fcf103a288fa4e0c464a2b4d1ce393254f0b1ff3e657ac7a087158dd56e: Status 404 returned error can't find the container with id d5bb5fcf103a288fa4e0c464a2b4d1ce393254f0b1ff3e657ac7a087158dd56e Jan 27 13:29:33 crc kubenswrapper[4786]: I0127 13:29:33.672081 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-fwsr4" event={"ID":"08dce0df-922f-4538-abaf-ec64509f246f","Type":"ContainerStarted","Data":"d5bb5fcf103a288fa4e0c464a2b4d1ce393254f0b1ff3e657ac7a087158dd56e"} Jan 27 13:29:39 crc kubenswrapper[4786]: I0127 13:29:39.532454 
4786 patch_prober.go:28] interesting pod/machine-config-daemon-7bxtk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:29:39 crc kubenswrapper[4786]: I0127 13:29:39.533780 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:29:43 crc kubenswrapper[4786]: I0127 13:29:43.751462 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-fwsr4" event={"ID":"08dce0df-922f-4538-abaf-ec64509f246f","Type":"ContainerStarted","Data":"4a5d88af7dc7523f178ff882c98200da2ee5c08c3b0172f9da010d5dfac91dd4"} Jan 27 13:29:43 crc kubenswrapper[4786]: I0127 13:29:43.773259 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-fwsr4" podStartSLOduration=2.120705752 podStartE2EDuration="11.773241974s" podCreationTimestamp="2026-01-27 13:29:32 +0000 UTC" firstStartedPulling="2026-01-27 13:29:32.908906317 +0000 UTC m=+1356.119520436" lastFinishedPulling="2026-01-27 13:29:42.561442539 +0000 UTC m=+1365.772056658" observedRunningTime="2026-01-27 13:29:43.767460175 +0000 UTC m=+1366.978074294" watchObservedRunningTime="2026-01-27 13:29:43.773241974 +0000 UTC m=+1366.983856093" Jan 27 13:29:54 crc kubenswrapper[4786]: I0127 13:29:54.830114 4786 generic.go:334] "Generic (PLEG): container finished" podID="08dce0df-922f-4538-abaf-ec64509f246f" containerID="4a5d88af7dc7523f178ff882c98200da2ee5c08c3b0172f9da010d5dfac91dd4" exitCode=0 Jan 27 13:29:54 crc kubenswrapper[4786]: I0127 13:29:54.830235 4786 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-fwsr4" event={"ID":"08dce0df-922f-4538-abaf-ec64509f246f","Type":"ContainerDied","Data":"4a5d88af7dc7523f178ff882c98200da2ee5c08c3b0172f9da010d5dfac91dd4"} Jan 27 13:29:56 crc kubenswrapper[4786]: I0127 13:29:56.107521 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-fwsr4" Jan 27 13:29:56 crc kubenswrapper[4786]: I0127 13:29:56.216947 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cptc4\" (UniqueName: \"kubernetes.io/projected/08dce0df-922f-4538-abaf-ec64509f246f-kube-api-access-cptc4\") pod \"08dce0df-922f-4538-abaf-ec64509f246f\" (UID: \"08dce0df-922f-4538-abaf-ec64509f246f\") " Jan 27 13:29:56 crc kubenswrapper[4786]: I0127 13:29:56.217008 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08dce0df-922f-4538-abaf-ec64509f246f-scripts\") pod \"08dce0df-922f-4538-abaf-ec64509f246f\" (UID: \"08dce0df-922f-4538-abaf-ec64509f246f\") " Jan 27 13:29:56 crc kubenswrapper[4786]: I0127 13:29:56.217037 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08dce0df-922f-4538-abaf-ec64509f246f-config-data\") pod \"08dce0df-922f-4538-abaf-ec64509f246f\" (UID: \"08dce0df-922f-4538-abaf-ec64509f246f\") " Jan 27 13:29:56 crc kubenswrapper[4786]: I0127 13:29:56.223133 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08dce0df-922f-4538-abaf-ec64509f246f-scripts" (OuterVolumeSpecName: "scripts") pod "08dce0df-922f-4538-abaf-ec64509f246f" (UID: "08dce0df-922f-4538-abaf-ec64509f246f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:29:56 crc kubenswrapper[4786]: I0127 13:29:56.223212 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08dce0df-922f-4538-abaf-ec64509f246f-kube-api-access-cptc4" (OuterVolumeSpecName: "kube-api-access-cptc4") pod "08dce0df-922f-4538-abaf-ec64509f246f" (UID: "08dce0df-922f-4538-abaf-ec64509f246f"). InnerVolumeSpecName "kube-api-access-cptc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:29:56 crc kubenswrapper[4786]: I0127 13:29:56.240368 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08dce0df-922f-4538-abaf-ec64509f246f-config-data" (OuterVolumeSpecName: "config-data") pod "08dce0df-922f-4538-abaf-ec64509f246f" (UID: "08dce0df-922f-4538-abaf-ec64509f246f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:29:56 crc kubenswrapper[4786]: I0127 13:29:56.319358 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08dce0df-922f-4538-abaf-ec64509f246f-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:29:56 crc kubenswrapper[4786]: I0127 13:29:56.319403 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cptc4\" (UniqueName: \"kubernetes.io/projected/08dce0df-922f-4538-abaf-ec64509f246f-kube-api-access-cptc4\") on node \"crc\" DevicePath \"\"" Jan 27 13:29:56 crc kubenswrapper[4786]: I0127 13:29:56.319419 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08dce0df-922f-4538-abaf-ec64509f246f-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:29:56 crc kubenswrapper[4786]: I0127 13:29:56.849219 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-fwsr4" 
event={"ID":"08dce0df-922f-4538-abaf-ec64509f246f","Type":"ContainerDied","Data":"d5bb5fcf103a288fa4e0c464a2b4d1ce393254f0b1ff3e657ac7a087158dd56e"} Jan 27 13:29:56 crc kubenswrapper[4786]: I0127 13:29:56.849530 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5bb5fcf103a288fa4e0c464a2b4d1ce393254f0b1ff3e657ac7a087158dd56e" Jan 27 13:29:56 crc kubenswrapper[4786]: I0127 13:29:56.849273 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-fwsr4" Jan 27 13:29:56 crc kubenswrapper[4786]: I0127 13:29:56.992353 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 27 13:29:56 crc kubenswrapper[4786]: E0127 13:29:56.992710 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08dce0df-922f-4538-abaf-ec64509f246f" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 27 13:29:56 crc kubenswrapper[4786]: I0127 13:29:56.992728 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="08dce0df-922f-4538-abaf-ec64509f246f" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 27 13:29:56 crc kubenswrapper[4786]: I0127 13:29:56.992889 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="08dce0df-922f-4538-abaf-ec64509f246f" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 27 13:29:56 crc kubenswrapper[4786]: I0127 13:29:56.993395 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:29:56 crc kubenswrapper[4786]: I0127 13:29:56.998950 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 27 13:29:56 crc kubenswrapper[4786]: I0127 13:29:56.999159 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-nova-kuttl-dockercfg-gxc5h" Jan 27 13:29:57 crc kubenswrapper[4786]: I0127 13:29:57.008684 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 27 13:29:57 crc kubenswrapper[4786]: I0127 13:29:57.144322 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh7bg\" (UniqueName: \"kubernetes.io/projected/02cab41f-e7f7-411d-a37e-4406f5c7bdc0-kube-api-access-vh7bg\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"02cab41f-e7f7-411d-a37e-4406f5c7bdc0\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:29:57 crc kubenswrapper[4786]: I0127 13:29:57.144407 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02cab41f-e7f7-411d-a37e-4406f5c7bdc0-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"02cab41f-e7f7-411d-a37e-4406f5c7bdc0\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:29:57 crc kubenswrapper[4786]: I0127 13:29:57.245765 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh7bg\" (UniqueName: \"kubernetes.io/projected/02cab41f-e7f7-411d-a37e-4406f5c7bdc0-kube-api-access-vh7bg\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"02cab41f-e7f7-411d-a37e-4406f5c7bdc0\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:29:57 crc kubenswrapper[4786]: I0127 13:29:57.245871 4786 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02cab41f-e7f7-411d-a37e-4406f5c7bdc0-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"02cab41f-e7f7-411d-a37e-4406f5c7bdc0\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:29:57 crc kubenswrapper[4786]: I0127 13:29:57.250106 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02cab41f-e7f7-411d-a37e-4406f5c7bdc0-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"02cab41f-e7f7-411d-a37e-4406f5c7bdc0\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:29:57 crc kubenswrapper[4786]: I0127 13:29:57.262009 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh7bg\" (UniqueName: \"kubernetes.io/projected/02cab41f-e7f7-411d-a37e-4406f5c7bdc0-kube-api-access-vh7bg\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"02cab41f-e7f7-411d-a37e-4406f5c7bdc0\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:29:57 crc kubenswrapper[4786]: I0127 13:29:57.354389 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:29:57 crc kubenswrapper[4786]: I0127 13:29:57.748047 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 27 13:29:57 crc kubenswrapper[4786]: I0127 13:29:57.870549 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"02cab41f-e7f7-411d-a37e-4406f5c7bdc0","Type":"ContainerStarted","Data":"8dba88b2d6a862e23ae738d35e5ad5f98f12da587d93be41908fea2459215d16"} Jan 27 13:29:58 crc kubenswrapper[4786]: I0127 13:29:58.878777 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"02cab41f-e7f7-411d-a37e-4406f5c7bdc0","Type":"ContainerStarted","Data":"308beb7d926b8e84a736bac265b78e833053e6cef7f55c94ea731bc385fa1734"} Jan 27 13:29:58 crc kubenswrapper[4786]: I0127 13:29:58.879104 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:29:58 crc kubenswrapper[4786]: I0127 13:29:58.897016 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podStartSLOduration=2.896995787 podStartE2EDuration="2.896995787s" podCreationTimestamp="2026-01-27 13:29:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:29:58.893066659 +0000 UTC m=+1382.103680778" watchObservedRunningTime="2026-01-27 13:29:58.896995787 +0000 UTC m=+1382.107609906" Jan 27 13:30:00 crc kubenswrapper[4786]: I0127 13:30:00.140802 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492010-46pkp"] Jan 27 13:30:00 crc kubenswrapper[4786]: I0127 13:30:00.142189 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492010-46pkp" Jan 27 13:30:00 crc kubenswrapper[4786]: I0127 13:30:00.144757 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 13:30:00 crc kubenswrapper[4786]: I0127 13:30:00.144879 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 13:30:00 crc kubenswrapper[4786]: I0127 13:30:00.156000 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492010-46pkp"] Jan 27 13:30:00 crc kubenswrapper[4786]: I0127 13:30:00.190232 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdmxx\" (UniqueName: \"kubernetes.io/projected/44fae073-322b-4d2f-9fd5-aae69cfd6aac-kube-api-access-rdmxx\") pod \"collect-profiles-29492010-46pkp\" (UID: \"44fae073-322b-4d2f-9fd5-aae69cfd6aac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492010-46pkp" Jan 27 13:30:00 crc kubenswrapper[4786]: I0127 13:30:00.190382 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44fae073-322b-4d2f-9fd5-aae69cfd6aac-config-volume\") pod \"collect-profiles-29492010-46pkp\" (UID: \"44fae073-322b-4d2f-9fd5-aae69cfd6aac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492010-46pkp" Jan 27 13:30:00 crc kubenswrapper[4786]: I0127 13:30:00.190416 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44fae073-322b-4d2f-9fd5-aae69cfd6aac-secret-volume\") pod \"collect-profiles-29492010-46pkp\" (UID: \"44fae073-322b-4d2f-9fd5-aae69cfd6aac\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492010-46pkp" Jan 27 13:30:00 crc kubenswrapper[4786]: I0127 13:30:00.291429 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdmxx\" (UniqueName: \"kubernetes.io/projected/44fae073-322b-4d2f-9fd5-aae69cfd6aac-kube-api-access-rdmxx\") pod \"collect-profiles-29492010-46pkp\" (UID: \"44fae073-322b-4d2f-9fd5-aae69cfd6aac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492010-46pkp" Jan 27 13:30:00 crc kubenswrapper[4786]: I0127 13:30:00.291553 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44fae073-322b-4d2f-9fd5-aae69cfd6aac-config-volume\") pod \"collect-profiles-29492010-46pkp\" (UID: \"44fae073-322b-4d2f-9fd5-aae69cfd6aac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492010-46pkp" Jan 27 13:30:00 crc kubenswrapper[4786]: I0127 13:30:00.291580 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44fae073-322b-4d2f-9fd5-aae69cfd6aac-secret-volume\") pod \"collect-profiles-29492010-46pkp\" (UID: \"44fae073-322b-4d2f-9fd5-aae69cfd6aac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492010-46pkp" Jan 27 13:30:00 crc kubenswrapper[4786]: I0127 13:30:00.293426 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44fae073-322b-4d2f-9fd5-aae69cfd6aac-config-volume\") pod \"collect-profiles-29492010-46pkp\" (UID: \"44fae073-322b-4d2f-9fd5-aae69cfd6aac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492010-46pkp" Jan 27 13:30:00 crc kubenswrapper[4786]: I0127 13:30:00.300946 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/44fae073-322b-4d2f-9fd5-aae69cfd6aac-secret-volume\") pod \"collect-profiles-29492010-46pkp\" (UID: \"44fae073-322b-4d2f-9fd5-aae69cfd6aac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492010-46pkp" Jan 27 13:30:00 crc kubenswrapper[4786]: I0127 13:30:00.324471 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdmxx\" (UniqueName: \"kubernetes.io/projected/44fae073-322b-4d2f-9fd5-aae69cfd6aac-kube-api-access-rdmxx\") pod \"collect-profiles-29492010-46pkp\" (UID: \"44fae073-322b-4d2f-9fd5-aae69cfd6aac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492010-46pkp" Jan 27 13:30:00 crc kubenswrapper[4786]: I0127 13:30:00.463426 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492010-46pkp" Jan 27 13:30:00 crc kubenswrapper[4786]: I0127 13:30:00.877886 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492010-46pkp"] Jan 27 13:30:00 crc kubenswrapper[4786]: W0127 13:30:00.888776 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44fae073_322b_4d2f_9fd5_aae69cfd6aac.slice/crio-f88c51f14139fb1a4a749cfa8728f58c3212daf24e29d79756afa085e6743682 WatchSource:0}: Error finding container f88c51f14139fb1a4a749cfa8728f58c3212daf24e29d79756afa085e6743682: Status 404 returned error can't find the container with id f88c51f14139fb1a4a749cfa8728f58c3212daf24e29d79756afa085e6743682 Jan 27 13:30:01 crc kubenswrapper[4786]: I0127 13:30:01.903745 4786 generic.go:334] "Generic (PLEG): container finished" podID="44fae073-322b-4d2f-9fd5-aae69cfd6aac" containerID="2755be91697192cd2b1404af19948048e5b4adced7e15a28b7a956d51b695a9c" exitCode=0 Jan 27 13:30:01 crc kubenswrapper[4786]: I0127 13:30:01.903851 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492010-46pkp" event={"ID":"44fae073-322b-4d2f-9fd5-aae69cfd6aac","Type":"ContainerDied","Data":"2755be91697192cd2b1404af19948048e5b4adced7e15a28b7a956d51b695a9c"} Jan 27 13:30:01 crc kubenswrapper[4786]: I0127 13:30:01.904267 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492010-46pkp" event={"ID":"44fae073-322b-4d2f-9fd5-aae69cfd6aac","Type":"ContainerStarted","Data":"f88c51f14139fb1a4a749cfa8728f58c3212daf24e29d79756afa085e6743682"} Jan 27 13:30:02 crc kubenswrapper[4786]: I0127 13:30:02.378127 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:30:02 crc kubenswrapper[4786]: I0127 13:30:02.745319 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6kksk"] Jan 27 13:30:02 crc kubenswrapper[4786]: I0127 13:30:02.747312 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6kksk" Jan 27 13:30:02 crc kubenswrapper[4786]: I0127 13:30:02.754520 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6kksk"] Jan 27 13:30:02 crc kubenswrapper[4786]: I0127 13:30:02.809967 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-4nv65"] Jan 27 13:30:02 crc kubenswrapper[4786]: I0127 13:30:02.811217 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-4nv65" Jan 27 13:30:02 crc kubenswrapper[4786]: I0127 13:30:02.815672 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-scripts" Jan 27 13:30:02 crc kubenswrapper[4786]: I0127 13:30:02.815744 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-config-data" Jan 27 13:30:02 crc kubenswrapper[4786]: I0127 13:30:02.826185 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-4nv65"] Jan 27 13:30:02 crc kubenswrapper[4786]: I0127 13:30:02.930228 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d75d515-6418-4134-84d6-ed12a8d75de8-catalog-content\") pod \"redhat-operators-6kksk\" (UID: \"3d75d515-6418-4134-84d6-ed12a8d75de8\") " pod="openshift-marketplace/redhat-operators-6kksk" Jan 27 13:30:02 crc kubenswrapper[4786]: I0127 13:30:02.930288 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d75d515-6418-4134-84d6-ed12a8d75de8-utilities\") pod \"redhat-operators-6kksk\" (UID: \"3d75d515-6418-4134-84d6-ed12a8d75de8\") " pod="openshift-marketplace/redhat-operators-6kksk" Jan 27 13:30:02 crc kubenswrapper[4786]: I0127 13:30:02.930317 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc35b06a-c49a-45a3-8bac-626ad8256ad2-config-data\") pod \"nova-kuttl-cell0-cell-mapping-4nv65\" (UID: \"fc35b06a-c49a-45a3-8bac-626ad8256ad2\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-4nv65" Jan 27 13:30:02 crc kubenswrapper[4786]: I0127 13:30:02.930347 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc35b06a-c49a-45a3-8bac-626ad8256ad2-scripts\") pod \"nova-kuttl-cell0-cell-mapping-4nv65\" (UID: \"fc35b06a-c49a-45a3-8bac-626ad8256ad2\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-4nv65" Jan 27 13:30:02 crc kubenswrapper[4786]: I0127 13:30:02.930369 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l77m4\" (UniqueName: \"kubernetes.io/projected/fc35b06a-c49a-45a3-8bac-626ad8256ad2-kube-api-access-l77m4\") pod \"nova-kuttl-cell0-cell-mapping-4nv65\" (UID: \"fc35b06a-c49a-45a3-8bac-626ad8256ad2\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-4nv65" Jan 27 13:30:02 crc kubenswrapper[4786]: I0127 13:30:02.930534 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6wf9\" (UniqueName: \"kubernetes.io/projected/3d75d515-6418-4134-84d6-ed12a8d75de8-kube-api-access-g6wf9\") pod \"redhat-operators-6kksk\" (UID: \"3d75d515-6418-4134-84d6-ed12a8d75de8\") " pod="openshift-marketplace/redhat-operators-6kksk" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.024300 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.025643 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.029557 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.031307 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l77m4\" (UniqueName: \"kubernetes.io/projected/fc35b06a-c49a-45a3-8bac-626ad8256ad2-kube-api-access-l77m4\") pod \"nova-kuttl-cell0-cell-mapping-4nv65\" (UID: \"fc35b06a-c49a-45a3-8bac-626ad8256ad2\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-4nv65" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.031365 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6wf9\" (UniqueName: \"kubernetes.io/projected/3d75d515-6418-4134-84d6-ed12a8d75de8-kube-api-access-g6wf9\") pod \"redhat-operators-6kksk\" (UID: \"3d75d515-6418-4134-84d6-ed12a8d75de8\") " pod="openshift-marketplace/redhat-operators-6kksk" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.031427 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d75d515-6418-4134-84d6-ed12a8d75de8-catalog-content\") pod \"redhat-operators-6kksk\" (UID: \"3d75d515-6418-4134-84d6-ed12a8d75de8\") " pod="openshift-marketplace/redhat-operators-6kksk" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.031464 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d75d515-6418-4134-84d6-ed12a8d75de8-utilities\") pod \"redhat-operators-6kksk\" (UID: \"3d75d515-6418-4134-84d6-ed12a8d75de8\") " pod="openshift-marketplace/redhat-operators-6kksk" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.031497 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/fc35b06a-c49a-45a3-8bac-626ad8256ad2-config-data\") pod \"nova-kuttl-cell0-cell-mapping-4nv65\" (UID: \"fc35b06a-c49a-45a3-8bac-626ad8256ad2\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-4nv65" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.031535 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc35b06a-c49a-45a3-8bac-626ad8256ad2-scripts\") pod \"nova-kuttl-cell0-cell-mapping-4nv65\" (UID: \"fc35b06a-c49a-45a3-8bac-626ad8256ad2\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-4nv65" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.032080 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d75d515-6418-4134-84d6-ed12a8d75de8-catalog-content\") pod \"redhat-operators-6kksk\" (UID: \"3d75d515-6418-4134-84d6-ed12a8d75de8\") " pod="openshift-marketplace/redhat-operators-6kksk" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.032393 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d75d515-6418-4134-84d6-ed12a8d75de8-utilities\") pod \"redhat-operators-6kksk\" (UID: \"3d75d515-6418-4134-84d6-ed12a8d75de8\") " pod="openshift-marketplace/redhat-operators-6kksk" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.033313 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.041540 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc35b06a-c49a-45a3-8bac-626ad8256ad2-config-data\") pod \"nova-kuttl-cell0-cell-mapping-4nv65\" (UID: \"fc35b06a-c49a-45a3-8bac-626ad8256ad2\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-4nv65" Jan 27 13:30:03 crc 
kubenswrapper[4786]: I0127 13:30:03.048313 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc35b06a-c49a-45a3-8bac-626ad8256ad2-scripts\") pod \"nova-kuttl-cell0-cell-mapping-4nv65\" (UID: \"fc35b06a-c49a-45a3-8bac-626ad8256ad2\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-4nv65" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.052814 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l77m4\" (UniqueName: \"kubernetes.io/projected/fc35b06a-c49a-45a3-8bac-626ad8256ad2-kube-api-access-l77m4\") pod \"nova-kuttl-cell0-cell-mapping-4nv65\" (UID: \"fc35b06a-c49a-45a3-8bac-626ad8256ad2\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-4nv65" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.064857 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6wf9\" (UniqueName: \"kubernetes.io/projected/3d75d515-6418-4134-84d6-ed12a8d75de8-kube-api-access-g6wf9\") pod \"redhat-operators-6kksk\" (UID: \"3d75d515-6418-4134-84d6-ed12a8d75de8\") " pod="openshift-marketplace/redhat-operators-6kksk" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.069249 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6kksk" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.115427 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.116858 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.119984 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-novncproxy-config-data" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.127094 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-4nv65" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.132866 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.143352 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8c9ecfc-d37c-4c95-9a5f-d12268c17204-config-data\") pod \"nova-kuttl-api-0\" (UID: \"e8c9ecfc-d37c-4c95-9a5f-d12268c17204\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.143409 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhhsd\" (UniqueName: \"kubernetes.io/projected/e8c9ecfc-d37c-4c95-9a5f-d12268c17204-kube-api-access-mhhsd\") pod \"nova-kuttl-api-0\" (UID: \"e8c9ecfc-d37c-4c95-9a5f-d12268c17204\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.143460 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8c9ecfc-d37c-4c95-9a5f-d12268c17204-logs\") pod \"nova-kuttl-api-0\" (UID: \"e8c9ecfc-d37c-4c95-9a5f-d12268c17204\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.211805 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:30:03 crc 
kubenswrapper[4786]: I0127 13:30:03.212937 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.216615 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.243349 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.244520 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8c9ecfc-d37c-4c95-9a5f-d12268c17204-config-data\") pod \"nova-kuttl-api-0\" (UID: \"e8c9ecfc-d37c-4c95-9a5f-d12268c17204\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.244585 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfzff\" (UniqueName: \"kubernetes.io/projected/8f166195-a387-40f9-b587-3e8e5c3afcf9-kube-api-access-nfzff\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"8f166195-a387-40f9-b587-3e8e5c3afcf9\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.244642 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhhsd\" (UniqueName: \"kubernetes.io/projected/e8c9ecfc-d37c-4c95-9a5f-d12268c17204-kube-api-access-mhhsd\") pod \"nova-kuttl-api-0\" (UID: \"e8c9ecfc-d37c-4c95-9a5f-d12268c17204\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.246845 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f166195-a387-40f9-b587-3e8e5c3afcf9-config-data\") pod 
\"nova-kuttl-cell1-novncproxy-0\" (UID: \"8f166195-a387-40f9-b587-3e8e5c3afcf9\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.246916 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8c9ecfc-d37c-4c95-9a5f-d12268c17204-logs\") pod \"nova-kuttl-api-0\" (UID: \"e8c9ecfc-d37c-4c95-9a5f-d12268c17204\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.247707 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8c9ecfc-d37c-4c95-9a5f-d12268c17204-logs\") pod \"nova-kuttl-api-0\" (UID: \"e8c9ecfc-d37c-4c95-9a5f-d12268c17204\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.263191 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8c9ecfc-d37c-4c95-9a5f-d12268c17204-config-data\") pod \"nova-kuttl-api-0\" (UID: \"e8c9ecfc-d37c-4c95-9a5f-d12268c17204\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.271177 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhhsd\" (UniqueName: \"kubernetes.io/projected/e8c9ecfc-d37c-4c95-9a5f-d12268c17204-kube-api-access-mhhsd\") pod \"nova-kuttl-api-0\" (UID: \"e8c9ecfc-d37c-4c95-9a5f-d12268c17204\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.279585 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.281236 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.285318 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.309362 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.350822 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59a29346-eda2-48ed-9f24-e31d93aa0950-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"59a29346-eda2-48ed-9f24-e31d93aa0950\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.350880 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2kjh\" (UniqueName: \"kubernetes.io/projected/59a29346-eda2-48ed-9f24-e31d93aa0950-kube-api-access-g2kjh\") pod \"nova-kuttl-scheduler-0\" (UID: \"59a29346-eda2-48ed-9f24-e31d93aa0950\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.350904 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f166195-a387-40f9-b587-3e8e5c3afcf9-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"8f166195-a387-40f9-b587-3e8e5c3afcf9\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.351004 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfzff\" (UniqueName: \"kubernetes.io/projected/8f166195-a387-40f9-b587-3e8e5c3afcf9-kube-api-access-nfzff\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"8f166195-a387-40f9-b587-3e8e5c3afcf9\") " 
pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.357113 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f166195-a387-40f9-b587-3e8e5c3afcf9-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"8f166195-a387-40f9-b587-3e8e5c3afcf9\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.376452 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfzff\" (UniqueName: \"kubernetes.io/projected/8f166195-a387-40f9-b587-3e8e5c3afcf9-kube-api-access-nfzff\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"8f166195-a387-40f9-b587-3e8e5c3afcf9\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.448341 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492010-46pkp" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.452962 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59a29346-eda2-48ed-9f24-e31d93aa0950-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"59a29346-eda2-48ed-9f24-e31d93aa0950\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.453041 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2kjh\" (UniqueName: \"kubernetes.io/projected/59a29346-eda2-48ed-9f24-e31d93aa0950-kube-api-access-g2kjh\") pod \"nova-kuttl-scheduler-0\" (UID: \"59a29346-eda2-48ed-9f24-e31d93aa0950\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.453150 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8b7d7ee-1a81-4742-9a11-0461b89bbce3-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"e8b7d7ee-1a81-4742-9a11-0461b89bbce3\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.453190 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8b7d7ee-1a81-4742-9a11-0461b89bbce3-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"e8b7d7ee-1a81-4742-9a11-0461b89bbce3\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.453229 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtvjn\" (UniqueName: \"kubernetes.io/projected/e8b7d7ee-1a81-4742-9a11-0461b89bbce3-kube-api-access-xtvjn\") pod \"nova-kuttl-metadata-0\" (UID: \"e8b7d7ee-1a81-4742-9a11-0461b89bbce3\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.453577 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.462461 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59a29346-eda2-48ed-9f24-e31d93aa0950-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"59a29346-eda2-48ed-9f24-e31d93aa0950\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.465416 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.484145 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2kjh\" (UniqueName: \"kubernetes.io/projected/59a29346-eda2-48ed-9f24-e31d93aa0950-kube-api-access-g2kjh\") pod \"nova-kuttl-scheduler-0\" (UID: \"59a29346-eda2-48ed-9f24-e31d93aa0950\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.554368 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdmxx\" (UniqueName: \"kubernetes.io/projected/44fae073-322b-4d2f-9fd5-aae69cfd6aac-kube-api-access-rdmxx\") pod \"44fae073-322b-4d2f-9fd5-aae69cfd6aac\" (UID: \"44fae073-322b-4d2f-9fd5-aae69cfd6aac\") " Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.554836 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44fae073-322b-4d2f-9fd5-aae69cfd6aac-config-volume\") pod \"44fae073-322b-4d2f-9fd5-aae69cfd6aac\" (UID: \"44fae073-322b-4d2f-9fd5-aae69cfd6aac\") " Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.555063 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44fae073-322b-4d2f-9fd5-aae69cfd6aac-secret-volume\") pod \"44fae073-322b-4d2f-9fd5-aae69cfd6aac\" (UID: \"44fae073-322b-4d2f-9fd5-aae69cfd6aac\") " Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.555420 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44fae073-322b-4d2f-9fd5-aae69cfd6aac-config-volume" (OuterVolumeSpecName: "config-volume") pod "44fae073-322b-4d2f-9fd5-aae69cfd6aac" (UID: "44fae073-322b-4d2f-9fd5-aae69cfd6aac"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.558139 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44fae073-322b-4d2f-9fd5-aae69cfd6aac-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "44fae073-322b-4d2f-9fd5-aae69cfd6aac" (UID: "44fae073-322b-4d2f-9fd5-aae69cfd6aac"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.558580 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8b7d7ee-1a81-4742-9a11-0461b89bbce3-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"e8b7d7ee-1a81-4742-9a11-0461b89bbce3\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.558646 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtvjn\" (UniqueName: \"kubernetes.io/projected/e8b7d7ee-1a81-4742-9a11-0461b89bbce3-kube-api-access-xtvjn\") pod \"nova-kuttl-metadata-0\" (UID: \"e8b7d7ee-1a81-4742-9a11-0461b89bbce3\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.559070 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8b7d7ee-1a81-4742-9a11-0461b89bbce3-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"e8b7d7ee-1a81-4742-9a11-0461b89bbce3\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.559725 4786 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44fae073-322b-4d2f-9fd5-aae69cfd6aac-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.560368 4786 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8b7d7ee-1a81-4742-9a11-0461b89bbce3-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"e8b7d7ee-1a81-4742-9a11-0461b89bbce3\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.561260 4786 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44fae073-322b-4d2f-9fd5-aae69cfd6aac-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.561702 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44fae073-322b-4d2f-9fd5-aae69cfd6aac-kube-api-access-rdmxx" (OuterVolumeSpecName: "kube-api-access-rdmxx") pod "44fae073-322b-4d2f-9fd5-aae69cfd6aac" (UID: "44fae073-322b-4d2f-9fd5-aae69cfd6aac"). InnerVolumeSpecName "kube-api-access-rdmxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.575339 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8b7d7ee-1a81-4742-9a11-0461b89bbce3-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"e8b7d7ee-1a81-4742-9a11-0461b89bbce3\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.582570 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtvjn\" (UniqueName: \"kubernetes.io/projected/e8b7d7ee-1a81-4742-9a11-0461b89bbce3-kube-api-access-xtvjn\") pod \"nova-kuttl-metadata-0\" (UID: \"e8b7d7ee-1a81-4742-9a11-0461b89bbce3\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.633191 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.638035 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.670183 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdmxx\" (UniqueName: \"kubernetes.io/projected/44fae073-322b-4d2f-9fd5-aae69cfd6aac-kube-api-access-rdmxx\") on node \"crc\" DevicePath \"\"" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.774455 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6kksk"] Jan 27 13:30:03 crc kubenswrapper[4786]: W0127 13:30:03.827856 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d75d515_6418_4134_84d6_ed12a8d75de8.slice/crio-dc168da1e078d7419a6477a85b690868805b6a15eccd01426782ea363a2bc39e WatchSource:0}: Error finding container dc168da1e078d7419a6477a85b690868805b6a15eccd01426782ea363a2bc39e: Status 404 returned error can't find the container with id dc168da1e078d7419a6477a85b690868805b6a15eccd01426782ea363a2bc39e Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.873273 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-4nv65"] Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.932059 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-4nv65" event={"ID":"fc35b06a-c49a-45a3-8bac-626ad8256ad2","Type":"ContainerStarted","Data":"d3189b6b164cc789894c939957365e6f97c6d8e0f6e023b70e1db61519cb6e89"} Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.934893 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492010-46pkp" 
event={"ID":"44fae073-322b-4d2f-9fd5-aae69cfd6aac","Type":"ContainerDied","Data":"f88c51f14139fb1a4a749cfa8728f58c3212daf24e29d79756afa085e6743682"} Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.934933 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f88c51f14139fb1a4a749cfa8728f58c3212daf24e29d79756afa085e6743682" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.935003 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492010-46pkp" Jan 27 13:30:03 crc kubenswrapper[4786]: I0127 13:30:03.969643 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6kksk" event={"ID":"3d75d515-6418-4134-84d6-ed12a8d75de8","Type":"ContainerStarted","Data":"dc168da1e078d7419a6477a85b690868805b6a15eccd01426782ea363a2bc39e"} Jan 27 13:30:04 crc kubenswrapper[4786]: I0127 13:30:04.002063 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-d54j5"] Jan 27 13:30:04 crc kubenswrapper[4786]: E0127 13:30:04.002937 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44fae073-322b-4d2f-9fd5-aae69cfd6aac" containerName="collect-profiles" Jan 27 13:30:04 crc kubenswrapper[4786]: I0127 13:30:04.002954 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="44fae073-322b-4d2f-9fd5-aae69cfd6aac" containerName="collect-profiles" Jan 27 13:30:04 crc kubenswrapper[4786]: I0127 13:30:04.004066 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="44fae073-322b-4d2f-9fd5-aae69cfd6aac" containerName="collect-profiles" Jan 27 13:30:04 crc kubenswrapper[4786]: I0127 13:30:04.006040 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-d54j5" Jan 27 13:30:04 crc kubenswrapper[4786]: I0127 13:30:04.013473 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data" Jan 27 13:30:04 crc kubenswrapper[4786]: I0127 13:30:04.014207 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-scripts" Jan 27 13:30:04 crc kubenswrapper[4786]: I0127 13:30:04.020679 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-d54j5"] Jan 27 13:30:04 crc kubenswrapper[4786]: I0127 13:30:04.088217 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 27 13:30:04 crc kubenswrapper[4786]: W0127 13:30:04.093809 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f166195_a387_40f9_b587_3e8e5c3afcf9.slice/crio-87266f589ad18e20a0b9f1a17950861bed51d58ebaefcec5c2f6382898661579 WatchSource:0}: Error finding container 87266f589ad18e20a0b9f1a17950861bed51d58ebaefcec5c2f6382898661579: Status 404 returned error can't find the container with id 87266f589ad18e20a0b9f1a17950861bed51d58ebaefcec5c2f6382898661579 Jan 27 13:30:04 crc kubenswrapper[4786]: I0127 13:30:04.100137 4786 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 13:30:04 crc kubenswrapper[4786]: I0127 13:30:04.113709 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:30:04 crc kubenswrapper[4786]: I0127 13:30:04.181225 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d22bccf-b474-465b-8a31-d5b965a5448a-config-data\") pod 
\"nova-kuttl-cell1-conductor-db-sync-d54j5\" (UID: \"2d22bccf-b474-465b-8a31-d5b965a5448a\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-d54j5" Jan 27 13:30:04 crc kubenswrapper[4786]: I0127 13:30:04.181283 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8n9w\" (UniqueName: \"kubernetes.io/projected/2d22bccf-b474-465b-8a31-d5b965a5448a-kube-api-access-x8n9w\") pod \"nova-kuttl-cell1-conductor-db-sync-d54j5\" (UID: \"2d22bccf-b474-465b-8a31-d5b965a5448a\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-d54j5" Jan 27 13:30:04 crc kubenswrapper[4786]: I0127 13:30:04.181334 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d22bccf-b474-465b-8a31-d5b965a5448a-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-d54j5\" (UID: \"2d22bccf-b474-465b-8a31-d5b965a5448a\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-d54j5" Jan 27 13:30:04 crc kubenswrapper[4786]: I0127 13:30:04.251525 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:30:04 crc kubenswrapper[4786]: I0127 13:30:04.282764 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d22bccf-b474-465b-8a31-d5b965a5448a-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-d54j5\" (UID: \"2d22bccf-b474-465b-8a31-d5b965a5448a\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-d54j5" Jan 27 13:30:04 crc kubenswrapper[4786]: I0127 13:30:04.283565 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8n9w\" (UniqueName: \"kubernetes.io/projected/2d22bccf-b474-465b-8a31-d5b965a5448a-kube-api-access-x8n9w\") pod \"nova-kuttl-cell1-conductor-db-sync-d54j5\" (UID: \"2d22bccf-b474-465b-8a31-d5b965a5448a\") " 
pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-d54j5" Jan 27 13:30:04 crc kubenswrapper[4786]: I0127 13:30:04.283638 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d22bccf-b474-465b-8a31-d5b965a5448a-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-d54j5\" (UID: \"2d22bccf-b474-465b-8a31-d5b965a5448a\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-d54j5" Jan 27 13:30:04 crc kubenswrapper[4786]: I0127 13:30:04.289528 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d22bccf-b474-465b-8a31-d5b965a5448a-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-d54j5\" (UID: \"2d22bccf-b474-465b-8a31-d5b965a5448a\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-d54j5" Jan 27 13:30:04 crc kubenswrapper[4786]: I0127 13:30:04.294131 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d22bccf-b474-465b-8a31-d5b965a5448a-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-d54j5\" (UID: \"2d22bccf-b474-465b-8a31-d5b965a5448a\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-d54j5" Jan 27 13:30:04 crc kubenswrapper[4786]: I0127 13:30:04.302723 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8n9w\" (UniqueName: \"kubernetes.io/projected/2d22bccf-b474-465b-8a31-d5b965a5448a-kube-api-access-x8n9w\") pod \"nova-kuttl-cell1-conductor-db-sync-d54j5\" (UID: \"2d22bccf-b474-465b-8a31-d5b965a5448a\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-d54j5" Jan 27 13:30:04 crc kubenswrapper[4786]: I0127 13:30:04.352809 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:30:04 crc kubenswrapper[4786]: I0127 13:30:04.354676 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-d54j5" Jan 27 13:30:04 crc kubenswrapper[4786]: W0127 13:30:04.356388 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59a29346_eda2_48ed_9f24_e31d93aa0950.slice/crio-f454bf8a8f0745f26b8b26bfaa0d1b69a841d6e9485f272fc619ee19b9dd464b WatchSource:0}: Error finding container f454bf8a8f0745f26b8b26bfaa0d1b69a841d6e9485f272fc619ee19b9dd464b: Status 404 returned error can't find the container with id f454bf8a8f0745f26b8b26bfaa0d1b69a841d6e9485f272fc619ee19b9dd464b Jan 27 13:30:04 crc kubenswrapper[4786]: I0127 13:30:04.827817 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-d54j5"] Jan 27 13:30:05 crc kubenswrapper[4786]: I0127 13:30:05.000060 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"59a29346-eda2-48ed-9f24-e31d93aa0950","Type":"ContainerStarted","Data":"f454bf8a8f0745f26b8b26bfaa0d1b69a841d6e9485f272fc619ee19b9dd464b"} Jan 27 13:30:05 crc kubenswrapper[4786]: I0127 13:30:05.003022 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-4nv65" event={"ID":"fc35b06a-c49a-45a3-8bac-626ad8256ad2","Type":"ContainerStarted","Data":"ff55188d60a092bcd324496b453bf4b7dd890b3114ee21c84c242a9731df73ab"} Jan 27 13:30:05 crc kubenswrapper[4786]: I0127 13:30:05.006104 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-d54j5" event={"ID":"2d22bccf-b474-465b-8a31-d5b965a5448a","Type":"ContainerStarted","Data":"cd22b5c3c277d1378a53cea6a20f44b8845f9c212f9b0231d87a1c6ced4350c4"} Jan 27 13:30:05 crc kubenswrapper[4786]: I0127 13:30:05.007773 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" 
event={"ID":"8f166195-a387-40f9-b587-3e8e5c3afcf9","Type":"ContainerStarted","Data":"87266f589ad18e20a0b9f1a17950861bed51d58ebaefcec5c2f6382898661579"} Jan 27 13:30:05 crc kubenswrapper[4786]: I0127 13:30:05.009515 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"e8c9ecfc-d37c-4c95-9a5f-d12268c17204","Type":"ContainerStarted","Data":"a9cb5f75bfbf57b8b7db26610b94cf04ac0f607bf93d1138682756ff9c6af707"} Jan 27 13:30:05 crc kubenswrapper[4786]: I0127 13:30:05.023410 4786 generic.go:334] "Generic (PLEG): container finished" podID="3d75d515-6418-4134-84d6-ed12a8d75de8" containerID="2684a5fe3ad9e6feb668b253a5391a7bfe47e7cf1c293928ec37fbbb7c9f95e1" exitCode=0 Jan 27 13:30:05 crc kubenswrapper[4786]: I0127 13:30:05.023849 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6kksk" event={"ID":"3d75d515-6418-4134-84d6-ed12a8d75de8","Type":"ContainerDied","Data":"2684a5fe3ad9e6feb668b253a5391a7bfe47e7cf1c293928ec37fbbb7c9f95e1"} Jan 27 13:30:05 crc kubenswrapper[4786]: I0127 13:30:05.027266 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-4nv65" podStartSLOduration=3.027244892 podStartE2EDuration="3.027244892s" podCreationTimestamp="2026-01-27 13:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:30:05.022122792 +0000 UTC m=+1388.232736911" watchObservedRunningTime="2026-01-27 13:30:05.027244892 +0000 UTC m=+1388.237859011" Jan 27 13:30:05 crc kubenswrapper[4786]: I0127 13:30:05.027713 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"e8b7d7ee-1a81-4742-9a11-0461b89bbce3","Type":"ContainerStarted","Data":"9ad56ebab6e4624a7cdc75bf32da83414fb8e72323ebed6bdfd90108cc658cbe"} Jan 27 13:30:05 crc kubenswrapper[4786]: I0127 
13:30:05.038754 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-d54j5" podStartSLOduration=2.038738377 podStartE2EDuration="2.038738377s" podCreationTimestamp="2026-01-27 13:30:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:30:05.035326914 +0000 UTC m=+1388.245941033" watchObservedRunningTime="2026-01-27 13:30:05.038738377 +0000 UTC m=+1388.249352496" Jan 27 13:30:06 crc kubenswrapper[4786]: I0127 13:30:06.049164 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-d54j5" event={"ID":"2d22bccf-b474-465b-8a31-d5b965a5448a","Type":"ContainerStarted","Data":"008416ea6a7b9aeb70f9e6f50e083b0dabb18aaccfd570c460f0464be175c06f"} Jan 27 13:30:08 crc kubenswrapper[4786]: I0127 13:30:08.080824 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"59a29346-eda2-48ed-9f24-e31d93aa0950","Type":"ContainerStarted","Data":"7f4fd365aecba422b826b1c3b354936e6344acf4208412fc44cec1fedf4d740e"} Jan 27 13:30:08 crc kubenswrapper[4786]: I0127 13:30:08.094248 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"8f166195-a387-40f9-b587-3e8e5c3afcf9","Type":"ContainerStarted","Data":"360d2cc0c05c30a9fdead1af09fd2365e02cdecf4bfb9c61ddadbdf1153020c8"} Jan 27 13:30:08 crc kubenswrapper[4786]: I0127 13:30:08.098947 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"e8c9ecfc-d37c-4c95-9a5f-d12268c17204","Type":"ContainerStarted","Data":"1603621613cd078a34ac82295be36240850353e8683ca23e1ab8b408a2746674"} Jan 27 13:30:08 crc kubenswrapper[4786]: I0127 13:30:08.099012 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" 
event={"ID":"e8c9ecfc-d37c-4c95-9a5f-d12268c17204","Type":"ContainerStarted","Data":"f87b04610ed9ed08e40777e3656f7fccca82d935619ecaea2b476415287fcf49"} Jan 27 13:30:08 crc kubenswrapper[4786]: I0127 13:30:08.101590 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6kksk" event={"ID":"3d75d515-6418-4134-84d6-ed12a8d75de8","Type":"ContainerStarted","Data":"75c3f54c283baeb08a22790c8ea493cee72c13a638093f699680e8a40b37cc1c"} Jan 27 13:30:08 crc kubenswrapper[4786]: I0127 13:30:08.106572 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"e8b7d7ee-1a81-4742-9a11-0461b89bbce3","Type":"ContainerStarted","Data":"5b032c1fc07426f543341dd91b1db2a43379ce4c8d10b62a7ceae8fa170cea9c"} Jan 27 13:30:08 crc kubenswrapper[4786]: I0127 13:30:08.106674 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"e8b7d7ee-1a81-4742-9a11-0461b89bbce3","Type":"ContainerStarted","Data":"94522cebe735221313a9d5a8c2cbff7cb319c47c059557c38b7c44cad12c11ec"} Jan 27 13:30:08 crc kubenswrapper[4786]: I0127 13:30:08.108313 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.286918165 podStartE2EDuration="5.108296741s" podCreationTimestamp="2026-01-27 13:30:03 +0000 UTC" firstStartedPulling="2026-01-27 13:30:04.365566907 +0000 UTC m=+1387.576181026" lastFinishedPulling="2026-01-27 13:30:07.186945483 +0000 UTC m=+1390.397559602" observedRunningTime="2026-01-27 13:30:08.104250761 +0000 UTC m=+1391.314864970" watchObservedRunningTime="2026-01-27 13:30:08.108296741 +0000 UTC m=+1391.318910860" Jan 27 13:30:08 crc kubenswrapper[4786]: I0127 13:30:08.138411 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=3.032104886 podStartE2EDuration="6.138378616s" 
podCreationTimestamp="2026-01-27 13:30:02 +0000 UTC" firstStartedPulling="2026-01-27 13:30:04.131176726 +0000 UTC m=+1387.341790845" lastFinishedPulling="2026-01-27 13:30:07.237450456 +0000 UTC m=+1390.448064575" observedRunningTime="2026-01-27 13:30:08.137909873 +0000 UTC m=+1391.348523992" watchObservedRunningTime="2026-01-27 13:30:08.138378616 +0000 UTC m=+1391.348992735" Jan 27 13:30:08 crc kubenswrapper[4786]: I0127 13:30:08.179631 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podStartSLOduration=2.165396296 podStartE2EDuration="5.179594884s" podCreationTimestamp="2026-01-27 13:30:03 +0000 UTC" firstStartedPulling="2026-01-27 13:30:04.09991437 +0000 UTC m=+1387.310528489" lastFinishedPulling="2026-01-27 13:30:07.114112958 +0000 UTC m=+1390.324727077" observedRunningTime="2026-01-27 13:30:08.170797733 +0000 UTC m=+1391.381411852" watchObservedRunningTime="2026-01-27 13:30:08.179594884 +0000 UTC m=+1391.390209013" Jan 27 13:30:08 crc kubenswrapper[4786]: I0127 13:30:08.190830 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.248023899 podStartE2EDuration="5.190814041s" podCreationTimestamp="2026-01-27 13:30:03 +0000 UTC" firstStartedPulling="2026-01-27 13:30:04.255584944 +0000 UTC m=+1387.466199053" lastFinishedPulling="2026-01-27 13:30:07.198375076 +0000 UTC m=+1390.408989195" observedRunningTime="2026-01-27 13:30:08.186189145 +0000 UTC m=+1391.396803264" watchObservedRunningTime="2026-01-27 13:30:08.190814041 +0000 UTC m=+1391.401428160" Jan 27 13:30:08 crc kubenswrapper[4786]: I0127 13:30:08.465573 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:30:08 crc kubenswrapper[4786]: I0127 13:30:08.633823 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:30:08 crc kubenswrapper[4786]: I0127 13:30:08.638972 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:08 crc kubenswrapper[4786]: I0127 13:30:08.639506 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:09 crc kubenswrapper[4786]: I0127 13:30:09.532506 4786 patch_prober.go:28] interesting pod/machine-config-daemon-7bxtk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:30:09 crc kubenswrapper[4786]: I0127 13:30:09.532577 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:30:10 crc kubenswrapper[4786]: I0127 13:30:10.137084 4786 generic.go:334] "Generic (PLEG): container finished" podID="3d75d515-6418-4134-84d6-ed12a8d75de8" containerID="75c3f54c283baeb08a22790c8ea493cee72c13a638093f699680e8a40b37cc1c" exitCode=0 Jan 27 13:30:10 crc kubenswrapper[4786]: I0127 13:30:10.137282 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6kksk" event={"ID":"3d75d515-6418-4134-84d6-ed12a8d75de8","Type":"ContainerDied","Data":"75c3f54c283baeb08a22790c8ea493cee72c13a638093f699680e8a40b37cc1c"} Jan 27 13:30:11 crc kubenswrapper[4786]: I0127 13:30:11.159503 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6kksk" 
event={"ID":"3d75d515-6418-4134-84d6-ed12a8d75de8","Type":"ContainerStarted","Data":"b10e2b4f38514e99a42a94b78670aa1c759c6e6385390185e255e7198c5f8703"} Jan 27 13:30:12 crc kubenswrapper[4786]: I0127 13:30:12.168338 4786 generic.go:334] "Generic (PLEG): container finished" podID="fc35b06a-c49a-45a3-8bac-626ad8256ad2" containerID="ff55188d60a092bcd324496b453bf4b7dd890b3114ee21c84c242a9731df73ab" exitCode=0 Jan 27 13:30:12 crc kubenswrapper[4786]: I0127 13:30:12.168433 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-4nv65" event={"ID":"fc35b06a-c49a-45a3-8bac-626ad8256ad2","Type":"ContainerDied","Data":"ff55188d60a092bcd324496b453bf4b7dd890b3114ee21c84c242a9731df73ab"} Jan 27 13:30:12 crc kubenswrapper[4786]: I0127 13:30:12.184852 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6kksk" podStartSLOduration=4.625799482 podStartE2EDuration="10.18483624s" podCreationTimestamp="2026-01-27 13:30:02 +0000 UTC" firstStartedPulling="2026-01-27 13:30:05.025689 +0000 UTC m=+1388.236303119" lastFinishedPulling="2026-01-27 13:30:10.584725758 +0000 UTC m=+1393.795339877" observedRunningTime="2026-01-27 13:30:11.183803389 +0000 UTC m=+1394.394417508" watchObservedRunningTime="2026-01-27 13:30:12.18483624 +0000 UTC m=+1395.395450359" Jan 27 13:30:13 crc kubenswrapper[4786]: I0127 13:30:13.070847 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6kksk" Jan 27 13:30:13 crc kubenswrapper[4786]: I0127 13:30:13.070908 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6kksk" Jan 27 13:30:13 crc kubenswrapper[4786]: I0127 13:30:13.457687 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:30:13 crc kubenswrapper[4786]: I0127 13:30:13.458201 4786 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:30:13 crc kubenswrapper[4786]: I0127 13:30:13.475919 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:30:13 crc kubenswrapper[4786]: I0127 13:30:13.478685 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:30:13 crc kubenswrapper[4786]: I0127 13:30:13.518123 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-4nv65" Jan 27 13:30:13 crc kubenswrapper[4786]: I0127 13:30:13.633501 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc35b06a-c49a-45a3-8bac-626ad8256ad2-scripts\") pod \"fc35b06a-c49a-45a3-8bac-626ad8256ad2\" (UID: \"fc35b06a-c49a-45a3-8bac-626ad8256ad2\") " Jan 27 13:30:13 crc kubenswrapper[4786]: I0127 13:30:13.633549 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l77m4\" (UniqueName: \"kubernetes.io/projected/fc35b06a-c49a-45a3-8bac-626ad8256ad2-kube-api-access-l77m4\") pod \"fc35b06a-c49a-45a3-8bac-626ad8256ad2\" (UID: \"fc35b06a-c49a-45a3-8bac-626ad8256ad2\") " Jan 27 13:30:13 crc kubenswrapper[4786]: I0127 13:30:13.633640 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc35b06a-c49a-45a3-8bac-626ad8256ad2-config-data\") pod \"fc35b06a-c49a-45a3-8bac-626ad8256ad2\" (UID: \"fc35b06a-c49a-45a3-8bac-626ad8256ad2\") " Jan 27 13:30:13 crc kubenswrapper[4786]: I0127 13:30:13.634753 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:30:13 crc kubenswrapper[4786]: I0127 
13:30:13.639212 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc35b06a-c49a-45a3-8bac-626ad8256ad2-scripts" (OuterVolumeSpecName: "scripts") pod "fc35b06a-c49a-45a3-8bac-626ad8256ad2" (UID: "fc35b06a-c49a-45a3-8bac-626ad8256ad2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:30:13 crc kubenswrapper[4786]: I0127 13:30:13.639272 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:13 crc kubenswrapper[4786]: I0127 13:30:13.640124 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:13 crc kubenswrapper[4786]: I0127 13:30:13.656820 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc35b06a-c49a-45a3-8bac-626ad8256ad2-kube-api-access-l77m4" (OuterVolumeSpecName: "kube-api-access-l77m4") pod "fc35b06a-c49a-45a3-8bac-626ad8256ad2" (UID: "fc35b06a-c49a-45a3-8bac-626ad8256ad2"). InnerVolumeSpecName "kube-api-access-l77m4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:30:13 crc kubenswrapper[4786]: I0127 13:30:13.660770 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc35b06a-c49a-45a3-8bac-626ad8256ad2-config-data" (OuterVolumeSpecName: "config-data") pod "fc35b06a-c49a-45a3-8bac-626ad8256ad2" (UID: "fc35b06a-c49a-45a3-8bac-626ad8256ad2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:30:13 crc kubenswrapper[4786]: I0127 13:30:13.663514 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:30:13 crc kubenswrapper[4786]: I0127 13:30:13.735449 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc35b06a-c49a-45a3-8bac-626ad8256ad2-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:30:13 crc kubenswrapper[4786]: I0127 13:30:13.736006 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc35b06a-c49a-45a3-8bac-626ad8256ad2-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:30:13 crc kubenswrapper[4786]: I0127 13:30:13.736035 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l77m4\" (UniqueName: \"kubernetes.io/projected/fc35b06a-c49a-45a3-8bac-626ad8256ad2-kube-api-access-l77m4\") on node \"crc\" DevicePath \"\"" Jan 27 13:30:14 crc kubenswrapper[4786]: I0127 13:30:14.113199 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6kksk" podUID="3d75d515-6418-4134-84d6-ed12a8d75de8" containerName="registry-server" probeResult="failure" output=< Jan 27 13:30:14 crc kubenswrapper[4786]: timeout: failed to connect service ":50051" within 1s Jan 27 13:30:14 crc kubenswrapper[4786]: > Jan 27 13:30:14 crc kubenswrapper[4786]: I0127 13:30:14.183560 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-4nv65" event={"ID":"fc35b06a-c49a-45a3-8bac-626ad8256ad2","Type":"ContainerDied","Data":"d3189b6b164cc789894c939957365e6f97c6d8e0f6e023b70e1db61519cb6e89"} Jan 27 13:30:14 crc kubenswrapper[4786]: I0127 13:30:14.183620 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3189b6b164cc789894c939957365e6f97c6d8e0f6e023b70e1db61519cb6e89" Jan 27 
13:30:14 crc kubenswrapper[4786]: I0127 13:30:14.183799 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-4nv65" Jan 27 13:30:14 crc kubenswrapper[4786]: I0127 13:30:14.205031 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:30:14 crc kubenswrapper[4786]: I0127 13:30:14.215088 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:30:14 crc kubenswrapper[4786]: I0127 13:30:14.462358 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:30:14 crc kubenswrapper[4786]: I0127 13:30:14.463215 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="e8c9ecfc-d37c-4c95-9a5f-d12268c17204" containerName="nova-kuttl-api-log" containerID="cri-o://f87b04610ed9ed08e40777e3656f7fccca82d935619ecaea2b476415287fcf49" gracePeriod=30 Jan 27 13:30:14 crc kubenswrapper[4786]: I0127 13:30:14.463289 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="e8c9ecfc-d37c-4c95-9a5f-d12268c17204" containerName="nova-kuttl-api-api" containerID="cri-o://1603621613cd078a34ac82295be36240850353e8683ca23e1ab8b408a2746674" gracePeriod=30 Jan 27 13:30:14 crc kubenswrapper[4786]: I0127 13:30:14.470929 4786 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="e8c9ecfc-d37c-4c95-9a5f-d12268c17204" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.128:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 13:30:14 crc kubenswrapper[4786]: I0127 13:30:14.471018 4786 prober.go:107] "Probe failed" probeType="Startup" 
pod="nova-kuttl-default/nova-kuttl-api-0" podUID="e8c9ecfc-d37c-4c95-9a5f-d12268c17204" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.128:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 13:30:14 crc kubenswrapper[4786]: I0127 13:30:14.507103 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:30:14 crc kubenswrapper[4786]: I0127 13:30:14.720787 4786 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="e8b7d7ee-1a81-4742-9a11-0461b89bbce3" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.131:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 13:30:14 crc kubenswrapper[4786]: I0127 13:30:14.720890 4786 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="e8b7d7ee-1a81-4742-9a11-0461b89bbce3" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.131:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 13:30:14 crc kubenswrapper[4786]: I0127 13:30:14.749229 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:30:15 crc kubenswrapper[4786]: I0127 13:30:15.191490 4786 generic.go:334] "Generic (PLEG): container finished" podID="e8c9ecfc-d37c-4c95-9a5f-d12268c17204" containerID="f87b04610ed9ed08e40777e3656f7fccca82d935619ecaea2b476415287fcf49" exitCode=143 Jan 27 13:30:15 crc kubenswrapper[4786]: I0127 13:30:15.191679 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"e8c9ecfc-d37c-4c95-9a5f-d12268c17204","Type":"ContainerDied","Data":"f87b04610ed9ed08e40777e3656f7fccca82d935619ecaea2b476415287fcf49"} Jan 27 13:30:15 crc kubenswrapper[4786]: I0127 
13:30:15.191769 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="e8b7d7ee-1a81-4742-9a11-0461b89bbce3" containerName="nova-kuttl-metadata-log" containerID="cri-o://94522cebe735221313a9d5a8c2cbff7cb319c47c059557c38b7c44cad12c11ec" gracePeriod=30 Jan 27 13:30:15 crc kubenswrapper[4786]: I0127 13:30:15.191839 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="e8b7d7ee-1a81-4742-9a11-0461b89bbce3" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://5b032c1fc07426f543341dd91b1db2a43379ce4c8d10b62a7ceae8fa170cea9c" gracePeriod=30 Jan 27 13:30:16 crc kubenswrapper[4786]: I0127 13:30:16.201989 4786 generic.go:334] "Generic (PLEG): container finished" podID="e8b7d7ee-1a81-4742-9a11-0461b89bbce3" containerID="94522cebe735221313a9d5a8c2cbff7cb319c47c059557c38b7c44cad12c11ec" exitCode=143 Jan 27 13:30:16 crc kubenswrapper[4786]: I0127 13:30:16.202083 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"e8b7d7ee-1a81-4742-9a11-0461b89bbce3","Type":"ContainerDied","Data":"94522cebe735221313a9d5a8c2cbff7cb319c47c059557c38b7c44cad12c11ec"} Jan 27 13:30:16 crc kubenswrapper[4786]: I0127 13:30:16.204584 4786 generic.go:334] "Generic (PLEG): container finished" podID="2d22bccf-b474-465b-8a31-d5b965a5448a" containerID="008416ea6a7b9aeb70f9e6f50e083b0dabb18aaccfd570c460f0464be175c06f" exitCode=0 Jan 27 13:30:16 crc kubenswrapper[4786]: I0127 13:30:16.204715 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-d54j5" event={"ID":"2d22bccf-b474-465b-8a31-d5b965a5448a","Type":"ContainerDied","Data":"008416ea6a7b9aeb70f9e6f50e083b0dabb18aaccfd570c460f0464be175c06f"} Jan 27 13:30:16 crc kubenswrapper[4786]: I0127 13:30:16.204801 4786 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="59a29346-eda2-48ed-9f24-e31d93aa0950" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://7f4fd365aecba422b826b1c3b354936e6344acf4208412fc44cec1fedf4d740e" gracePeriod=30 Jan 27 13:30:17 crc kubenswrapper[4786]: I0127 13:30:17.516749 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-d54j5" Jan 27 13:30:17 crc kubenswrapper[4786]: I0127 13:30:17.640218 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d22bccf-b474-465b-8a31-d5b965a5448a-config-data\") pod \"2d22bccf-b474-465b-8a31-d5b965a5448a\" (UID: \"2d22bccf-b474-465b-8a31-d5b965a5448a\") " Jan 27 13:30:17 crc kubenswrapper[4786]: I0127 13:30:17.640641 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8n9w\" (UniqueName: \"kubernetes.io/projected/2d22bccf-b474-465b-8a31-d5b965a5448a-kube-api-access-x8n9w\") pod \"2d22bccf-b474-465b-8a31-d5b965a5448a\" (UID: \"2d22bccf-b474-465b-8a31-d5b965a5448a\") " Jan 27 13:30:17 crc kubenswrapper[4786]: I0127 13:30:17.640768 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d22bccf-b474-465b-8a31-d5b965a5448a-scripts\") pod \"2d22bccf-b474-465b-8a31-d5b965a5448a\" (UID: \"2d22bccf-b474-465b-8a31-d5b965a5448a\") " Jan 27 13:30:17 crc kubenswrapper[4786]: I0127 13:30:17.645185 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d22bccf-b474-465b-8a31-d5b965a5448a-scripts" (OuterVolumeSpecName: "scripts") pod "2d22bccf-b474-465b-8a31-d5b965a5448a" (UID: "2d22bccf-b474-465b-8a31-d5b965a5448a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:30:17 crc kubenswrapper[4786]: I0127 13:30:17.645729 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d22bccf-b474-465b-8a31-d5b965a5448a-kube-api-access-x8n9w" (OuterVolumeSpecName: "kube-api-access-x8n9w") pod "2d22bccf-b474-465b-8a31-d5b965a5448a" (UID: "2d22bccf-b474-465b-8a31-d5b965a5448a"). InnerVolumeSpecName "kube-api-access-x8n9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:30:17 crc kubenswrapper[4786]: I0127 13:30:17.663365 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d22bccf-b474-465b-8a31-d5b965a5448a-config-data" (OuterVolumeSpecName: "config-data") pod "2d22bccf-b474-465b-8a31-d5b965a5448a" (UID: "2d22bccf-b474-465b-8a31-d5b965a5448a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:30:17 crc kubenswrapper[4786]: I0127 13:30:17.742140 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d22bccf-b474-465b-8a31-d5b965a5448a-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:30:17 crc kubenswrapper[4786]: I0127 13:30:17.742180 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d22bccf-b474-465b-8a31-d5b965a5448a-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:30:17 crc kubenswrapper[4786]: I0127 13:30:17.742191 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8n9w\" (UniqueName: \"kubernetes.io/projected/2d22bccf-b474-465b-8a31-d5b965a5448a-kube-api-access-x8n9w\") on node \"crc\" DevicePath \"\"" Jan 27 13:30:18 crc kubenswrapper[4786]: I0127 13:30:18.224388 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-d54j5" 
event={"ID":"2d22bccf-b474-465b-8a31-d5b965a5448a","Type":"ContainerDied","Data":"cd22b5c3c277d1378a53cea6a20f44b8845f9c212f9b0231d87a1c6ced4350c4"} Jan 27 13:30:18 crc kubenswrapper[4786]: I0127 13:30:18.224564 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd22b5c3c277d1378a53cea6a20f44b8845f9c212f9b0231d87a1c6ced4350c4" Jan 27 13:30:18 crc kubenswrapper[4786]: I0127 13:30:18.224578 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-d54j5" Jan 27 13:30:18 crc kubenswrapper[4786]: I0127 13:30:18.299213 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 27 13:30:18 crc kubenswrapper[4786]: E0127 13:30:18.299979 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc35b06a-c49a-45a3-8bac-626ad8256ad2" containerName="nova-manage" Jan 27 13:30:18 crc kubenswrapper[4786]: I0127 13:30:18.300010 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc35b06a-c49a-45a3-8bac-626ad8256ad2" containerName="nova-manage" Jan 27 13:30:18 crc kubenswrapper[4786]: E0127 13:30:18.300035 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d22bccf-b474-465b-8a31-d5b965a5448a" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 27 13:30:18 crc kubenswrapper[4786]: I0127 13:30:18.300045 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d22bccf-b474-465b-8a31-d5b965a5448a" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 27 13:30:18 crc kubenswrapper[4786]: I0127 13:30:18.300278 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc35b06a-c49a-45a3-8bac-626ad8256ad2" containerName="nova-manage" Jan 27 13:30:18 crc kubenswrapper[4786]: I0127 13:30:18.300307 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d22bccf-b474-465b-8a31-d5b965a5448a" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 27 
13:30:18 crc kubenswrapper[4786]: I0127 13:30:18.301546 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:30:18 crc kubenswrapper[4786]: I0127 13:30:18.304166 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data" Jan 27 13:30:18 crc kubenswrapper[4786]: I0127 13:30:18.312235 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 27 13:30:18 crc kubenswrapper[4786]: I0127 13:30:18.352949 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78658d70-0256-4b13-9804-88758f0e33e5-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"78658d70-0256-4b13-9804-88758f0e33e5\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:30:18 crc kubenswrapper[4786]: I0127 13:30:18.353170 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfmmp\" (UniqueName: \"kubernetes.io/projected/78658d70-0256-4b13-9804-88758f0e33e5-kube-api-access-lfmmp\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"78658d70-0256-4b13-9804-88758f0e33e5\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:30:18 crc kubenswrapper[4786]: I0127 13:30:18.455224 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78658d70-0256-4b13-9804-88758f0e33e5-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"78658d70-0256-4b13-9804-88758f0e33e5\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:30:18 crc kubenswrapper[4786]: I0127 13:30:18.455312 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfmmp\" (UniqueName: 
\"kubernetes.io/projected/78658d70-0256-4b13-9804-88758f0e33e5-kube-api-access-lfmmp\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"78658d70-0256-4b13-9804-88758f0e33e5\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:30:18 crc kubenswrapper[4786]: I0127 13:30:18.460708 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78658d70-0256-4b13-9804-88758f0e33e5-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"78658d70-0256-4b13-9804-88758f0e33e5\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:30:18 crc kubenswrapper[4786]: I0127 13:30:18.472405 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfmmp\" (UniqueName: \"kubernetes.io/projected/78658d70-0256-4b13-9804-88758f0e33e5-kube-api-access-lfmmp\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"78658d70-0256-4b13-9804-88758f0e33e5\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:30:18 crc kubenswrapper[4786]: I0127 13:30:18.627316 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:30:18 crc kubenswrapper[4786]: E0127 13:30:18.635535 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7f4fd365aecba422b826b1c3b354936e6344acf4208412fc44cec1fedf4d740e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 13:30:18 crc kubenswrapper[4786]: E0127 13:30:18.636881 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7f4fd365aecba422b826b1c3b354936e6344acf4208412fc44cec1fedf4d740e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 13:30:18 crc kubenswrapper[4786]: E0127 13:30:18.638532 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7f4fd365aecba422b826b1c3b354936e6344acf4208412fc44cec1fedf4d740e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 13:30:18 crc kubenswrapper[4786]: E0127 13:30:18.638580 4786 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="59a29346-eda2-48ed-9f24-e31d93aa0950" containerName="nova-kuttl-scheduler-scheduler" Jan 27 13:30:19 crc kubenswrapper[4786]: I0127 13:30:19.038369 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 27 13:30:19 crc kubenswrapper[4786]: I0127 13:30:19.235930 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"78658d70-0256-4b13-9804-88758f0e33e5","Type":"ContainerStarted","Data":"9d3b271f28d55e1ab3872a515757f751be0a2b7246f4719493c6e43cbf53057c"} Jan 27 13:30:19 crc kubenswrapper[4786]: I0127 13:30:19.235974 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"78658d70-0256-4b13-9804-88758f0e33e5","Type":"ContainerStarted","Data":"9c4c201a4702d7a864e7e7d5f531a1d648f26576ceec1b4833c3e6393aee8b8b"} Jan 27 13:30:19 crc kubenswrapper[4786]: I0127 13:30:19.236002 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:30:19 crc kubenswrapper[4786]: I0127 13:30:19.238555 4786 generic.go:334] "Generic (PLEG): container finished" podID="59a29346-eda2-48ed-9f24-e31d93aa0950" containerID="7f4fd365aecba422b826b1c3b354936e6344acf4208412fc44cec1fedf4d740e" exitCode=0 Jan 27 13:30:19 crc kubenswrapper[4786]: I0127 13:30:19.238876 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"59a29346-eda2-48ed-9f24-e31d93aa0950","Type":"ContainerDied","Data":"7f4fd365aecba422b826b1c3b354936e6344acf4208412fc44cec1fedf4d740e"} Jan 27 13:30:19 crc kubenswrapper[4786]: I0127 13:30:19.250435 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podStartSLOduration=1.250390476 podStartE2EDuration="1.250390476s" podCreationTimestamp="2026-01-27 13:30:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:30:19.250003356 +0000 UTC m=+1402.460617485" watchObservedRunningTime="2026-01-27 13:30:19.250390476 +0000 UTC m=+1402.461004595" Jan 27 13:30:19 crc kubenswrapper[4786]: I0127 13:30:19.538928 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:30:19 crc kubenswrapper[4786]: I0127 13:30:19.678684 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59a29346-eda2-48ed-9f24-e31d93aa0950-config-data\") pod \"59a29346-eda2-48ed-9f24-e31d93aa0950\" (UID: \"59a29346-eda2-48ed-9f24-e31d93aa0950\") " Jan 27 13:30:19 crc kubenswrapper[4786]: I0127 13:30:19.678757 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2kjh\" (UniqueName: \"kubernetes.io/projected/59a29346-eda2-48ed-9f24-e31d93aa0950-kube-api-access-g2kjh\") pod \"59a29346-eda2-48ed-9f24-e31d93aa0950\" (UID: \"59a29346-eda2-48ed-9f24-e31d93aa0950\") " Jan 27 13:30:19 crc kubenswrapper[4786]: I0127 13:30:19.684789 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59a29346-eda2-48ed-9f24-e31d93aa0950-kube-api-access-g2kjh" (OuterVolumeSpecName: "kube-api-access-g2kjh") pod "59a29346-eda2-48ed-9f24-e31d93aa0950" (UID: "59a29346-eda2-48ed-9f24-e31d93aa0950"). InnerVolumeSpecName "kube-api-access-g2kjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:30:19 crc kubenswrapper[4786]: I0127 13:30:19.705963 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59a29346-eda2-48ed-9f24-e31d93aa0950-config-data" (OuterVolumeSpecName: "config-data") pod "59a29346-eda2-48ed-9f24-e31d93aa0950" (UID: "59a29346-eda2-48ed-9f24-e31d93aa0950"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:30:19 crc kubenswrapper[4786]: I0127 13:30:19.780838 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59a29346-eda2-48ed-9f24-e31d93aa0950-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:30:19 crc kubenswrapper[4786]: I0127 13:30:19.780873 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2kjh\" (UniqueName: \"kubernetes.io/projected/59a29346-eda2-48ed-9f24-e31d93aa0950-kube-api-access-g2kjh\") on node \"crc\" DevicePath \"\"" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.022585 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.084530 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8b7d7ee-1a81-4742-9a11-0461b89bbce3-config-data\") pod \"e8b7d7ee-1a81-4742-9a11-0461b89bbce3\" (UID: \"e8b7d7ee-1a81-4742-9a11-0461b89bbce3\") " Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.084737 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtvjn\" (UniqueName: \"kubernetes.io/projected/e8b7d7ee-1a81-4742-9a11-0461b89bbce3-kube-api-access-xtvjn\") pod \"e8b7d7ee-1a81-4742-9a11-0461b89bbce3\" (UID: \"e8b7d7ee-1a81-4742-9a11-0461b89bbce3\") " Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.084782 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8b7d7ee-1a81-4742-9a11-0461b89bbce3-logs\") pod \"e8b7d7ee-1a81-4742-9a11-0461b89bbce3\" (UID: \"e8b7d7ee-1a81-4742-9a11-0461b89bbce3\") " Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.085906 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/e8b7d7ee-1a81-4742-9a11-0461b89bbce3-logs" (OuterVolumeSpecName: "logs") pod "e8b7d7ee-1a81-4742-9a11-0461b89bbce3" (UID: "e8b7d7ee-1a81-4742-9a11-0461b89bbce3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.092859 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8b7d7ee-1a81-4742-9a11-0461b89bbce3-kube-api-access-xtvjn" (OuterVolumeSpecName: "kube-api-access-xtvjn") pod "e8b7d7ee-1a81-4742-9a11-0461b89bbce3" (UID: "e8b7d7ee-1a81-4742-9a11-0461b89bbce3"). InnerVolumeSpecName "kube-api-access-xtvjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.106184 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8b7d7ee-1a81-4742-9a11-0461b89bbce3-config-data" (OuterVolumeSpecName: "config-data") pod "e8b7d7ee-1a81-4742-9a11-0461b89bbce3" (UID: "e8b7d7ee-1a81-4742-9a11-0461b89bbce3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.186680 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8b7d7ee-1a81-4742-9a11-0461b89bbce3-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.186726 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtvjn\" (UniqueName: \"kubernetes.io/projected/e8b7d7ee-1a81-4742-9a11-0461b89bbce3-kube-api-access-xtvjn\") on node \"crc\" DevicePath \"\"" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.186739 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8b7d7ee-1a81-4742-9a11-0461b89bbce3-logs\") on node \"crc\" DevicePath \"\"" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.203781 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.248996 4786 generic.go:334] "Generic (PLEG): container finished" podID="e8c9ecfc-d37c-4c95-9a5f-d12268c17204" containerID="1603621613cd078a34ac82295be36240850353e8683ca23e1ab8b408a2746674" exitCode=0 Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.249054 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"e8c9ecfc-d37c-4c95-9a5f-d12268c17204","Type":"ContainerDied","Data":"1603621613cd078a34ac82295be36240850353e8683ca23e1ab8b408a2746674"} Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.249083 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"e8c9ecfc-d37c-4c95-9a5f-d12268c17204","Type":"ContainerDied","Data":"a9cb5f75bfbf57b8b7db26610b94cf04ac0f607bf93d1138682756ff9c6af707"} Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.249102 4786 scope.go:117] 
"RemoveContainer" containerID="1603621613cd078a34ac82295be36240850353e8683ca23e1ab8b408a2746674" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.249213 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.252936 4786 generic.go:334] "Generic (PLEG): container finished" podID="e8b7d7ee-1a81-4742-9a11-0461b89bbce3" containerID="5b032c1fc07426f543341dd91b1db2a43379ce4c8d10b62a7ceae8fa170cea9c" exitCode=0 Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.252969 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.253028 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"e8b7d7ee-1a81-4742-9a11-0461b89bbce3","Type":"ContainerDied","Data":"5b032c1fc07426f543341dd91b1db2a43379ce4c8d10b62a7ceae8fa170cea9c"} Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.253061 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"e8b7d7ee-1a81-4742-9a11-0461b89bbce3","Type":"ContainerDied","Data":"9ad56ebab6e4624a7cdc75bf32da83414fb8e72323ebed6bdfd90108cc658cbe"} Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.259320 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.261781 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"59a29346-eda2-48ed-9f24-e31d93aa0950","Type":"ContainerDied","Data":"f454bf8a8f0745f26b8b26bfaa0d1b69a841d6e9485f272fc619ee19b9dd464b"} Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.276861 4786 scope.go:117] "RemoveContainer" containerID="f87b04610ed9ed08e40777e3656f7fccca82d935619ecaea2b476415287fcf49" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.288932 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhhsd\" (UniqueName: \"kubernetes.io/projected/e8c9ecfc-d37c-4c95-9a5f-d12268c17204-kube-api-access-mhhsd\") pod \"e8c9ecfc-d37c-4c95-9a5f-d12268c17204\" (UID: \"e8c9ecfc-d37c-4c95-9a5f-d12268c17204\") " Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.289176 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8c9ecfc-d37c-4c95-9a5f-d12268c17204-logs\") pod \"e8c9ecfc-d37c-4c95-9a5f-d12268c17204\" (UID: \"e8c9ecfc-d37c-4c95-9a5f-d12268c17204\") " Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.289280 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8c9ecfc-d37c-4c95-9a5f-d12268c17204-config-data\") pod \"e8c9ecfc-d37c-4c95-9a5f-d12268c17204\" (UID: \"e8c9ecfc-d37c-4c95-9a5f-d12268c17204\") " Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.290377 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8c9ecfc-d37c-4c95-9a5f-d12268c17204-logs" (OuterVolumeSpecName: "logs") pod "e8c9ecfc-d37c-4c95-9a5f-d12268c17204" (UID: "e8c9ecfc-d37c-4c95-9a5f-d12268c17204"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.298371 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.314484 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8c9ecfc-d37c-4c95-9a5f-d12268c17204-config-data" (OuterVolumeSpecName: "config-data") pod "e8c9ecfc-d37c-4c95-9a5f-d12268c17204" (UID: "e8c9ecfc-d37c-4c95-9a5f-d12268c17204"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.317879 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.319804 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8c9ecfc-d37c-4c95-9a5f-d12268c17204-kube-api-access-mhhsd" (OuterVolumeSpecName: "kube-api-access-mhhsd") pod "e8c9ecfc-d37c-4c95-9a5f-d12268c17204" (UID: "e8c9ecfc-d37c-4c95-9a5f-d12268c17204"). InnerVolumeSpecName "kube-api-access-mhhsd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.326465 4786 scope.go:117] "RemoveContainer" containerID="1603621613cd078a34ac82295be36240850353e8683ca23e1ab8b408a2746674" Jan 27 13:30:20 crc kubenswrapper[4786]: E0127 13:30:20.328441 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1603621613cd078a34ac82295be36240850353e8683ca23e1ab8b408a2746674\": container with ID starting with 1603621613cd078a34ac82295be36240850353e8683ca23e1ab8b408a2746674 not found: ID does not exist" containerID="1603621613cd078a34ac82295be36240850353e8683ca23e1ab8b408a2746674" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.328481 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1603621613cd078a34ac82295be36240850353e8683ca23e1ab8b408a2746674"} err="failed to get container status \"1603621613cd078a34ac82295be36240850353e8683ca23e1ab8b408a2746674\": rpc error: code = NotFound desc = could not find container \"1603621613cd078a34ac82295be36240850353e8683ca23e1ab8b408a2746674\": container with ID starting with 1603621613cd078a34ac82295be36240850353e8683ca23e1ab8b408a2746674 not found: ID does not exist" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.328702 4786 scope.go:117] "RemoveContainer" containerID="f87b04610ed9ed08e40777e3656f7fccca82d935619ecaea2b476415287fcf49" Jan 27 13:30:20 crc kubenswrapper[4786]: E0127 13:30:20.335794 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f87b04610ed9ed08e40777e3656f7fccca82d935619ecaea2b476415287fcf49\": container with ID starting with f87b04610ed9ed08e40777e3656f7fccca82d935619ecaea2b476415287fcf49 not found: ID does not exist" containerID="f87b04610ed9ed08e40777e3656f7fccca82d935619ecaea2b476415287fcf49" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.335832 
4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f87b04610ed9ed08e40777e3656f7fccca82d935619ecaea2b476415287fcf49"} err="failed to get container status \"f87b04610ed9ed08e40777e3656f7fccca82d935619ecaea2b476415287fcf49\": rpc error: code = NotFound desc = could not find container \"f87b04610ed9ed08e40777e3656f7fccca82d935619ecaea2b476415287fcf49\": container with ID starting with f87b04610ed9ed08e40777e3656f7fccca82d935619ecaea2b476415287fcf49 not found: ID does not exist" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.335856 4786 scope.go:117] "RemoveContainer" containerID="5b032c1fc07426f543341dd91b1db2a43379ce4c8d10b62a7ceae8fa170cea9c" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.339468 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:30:20 crc kubenswrapper[4786]: E0127 13:30:20.339905 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8b7d7ee-1a81-4742-9a11-0461b89bbce3" containerName="nova-kuttl-metadata-log" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.339924 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8b7d7ee-1a81-4742-9a11-0461b89bbce3" containerName="nova-kuttl-metadata-log" Jan 27 13:30:20 crc kubenswrapper[4786]: E0127 13:30:20.339941 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59a29346-eda2-48ed-9f24-e31d93aa0950" containerName="nova-kuttl-scheduler-scheduler" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.339951 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="59a29346-eda2-48ed-9f24-e31d93aa0950" containerName="nova-kuttl-scheduler-scheduler" Jan 27 13:30:20 crc kubenswrapper[4786]: E0127 13:30:20.339978 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8b7d7ee-1a81-4742-9a11-0461b89bbce3" containerName="nova-kuttl-metadata-metadata" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.339989 4786 
state_mem.go:107] "Deleted CPUSet assignment" podUID="e8b7d7ee-1a81-4742-9a11-0461b89bbce3" containerName="nova-kuttl-metadata-metadata" Jan 27 13:30:20 crc kubenswrapper[4786]: E0127 13:30:20.340009 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8c9ecfc-d37c-4c95-9a5f-d12268c17204" containerName="nova-kuttl-api-api" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.340017 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8c9ecfc-d37c-4c95-9a5f-d12268c17204" containerName="nova-kuttl-api-api" Jan 27 13:30:20 crc kubenswrapper[4786]: E0127 13:30:20.340033 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8c9ecfc-d37c-4c95-9a5f-d12268c17204" containerName="nova-kuttl-api-log" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.340039 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8c9ecfc-d37c-4c95-9a5f-d12268c17204" containerName="nova-kuttl-api-log" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.340187 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="59a29346-eda2-48ed-9f24-e31d93aa0950" containerName="nova-kuttl-scheduler-scheduler" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.340197 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8b7d7ee-1a81-4742-9a11-0461b89bbce3" containerName="nova-kuttl-metadata-metadata" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.340205 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8c9ecfc-d37c-4c95-9a5f-d12268c17204" containerName="nova-kuttl-api-log" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.340218 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8b7d7ee-1a81-4742-9a11-0461b89bbce3" containerName="nova-kuttl-metadata-log" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.340229 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8c9ecfc-d37c-4c95-9a5f-d12268c17204" containerName="nova-kuttl-api-api" 
Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.341323 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.348681 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.348827 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.359579 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.378096 4786 scope.go:117] "RemoveContainer" containerID="94522cebe735221313a9d5a8c2cbff7cb319c47c059557c38b7c44cad12c11ec" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.391827 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpbp4\" (UniqueName: \"kubernetes.io/projected/5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba-kube-api-access-fpbp4\") pod \"nova-kuttl-metadata-0\" (UID: \"5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.391890 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.392055 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba-logs\") pod \"nova-kuttl-metadata-0\" (UID: 
\"5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.392171 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8c9ecfc-d37c-4c95-9a5f-d12268c17204-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.392193 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhhsd\" (UniqueName: \"kubernetes.io/projected/e8c9ecfc-d37c-4c95-9a5f-d12268c17204-kube-api-access-mhhsd\") on node \"crc\" DevicePath \"\"" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.392216 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8c9ecfc-d37c-4c95-9a5f-d12268c17204-logs\") on node \"crc\" DevicePath \"\"" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.401740 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.413450 4786 scope.go:117] "RemoveContainer" containerID="5b032c1fc07426f543341dd91b1db2a43379ce4c8d10b62a7ceae8fa170cea9c" Jan 27 13:30:20 crc kubenswrapper[4786]: E0127 13:30:20.414051 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b032c1fc07426f543341dd91b1db2a43379ce4c8d10b62a7ceae8fa170cea9c\": container with ID starting with 5b032c1fc07426f543341dd91b1db2a43379ce4c8d10b62a7ceae8fa170cea9c not found: ID does not exist" containerID="5b032c1fc07426f543341dd91b1db2a43379ce4c8d10b62a7ceae8fa170cea9c" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.414095 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b032c1fc07426f543341dd91b1db2a43379ce4c8d10b62a7ceae8fa170cea9c"} err="failed to get container status 
\"5b032c1fc07426f543341dd91b1db2a43379ce4c8d10b62a7ceae8fa170cea9c\": rpc error: code = NotFound desc = could not find container \"5b032c1fc07426f543341dd91b1db2a43379ce4c8d10b62a7ceae8fa170cea9c\": container with ID starting with 5b032c1fc07426f543341dd91b1db2a43379ce4c8d10b62a7ceae8fa170cea9c not found: ID does not exist" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.414121 4786 scope.go:117] "RemoveContainer" containerID="94522cebe735221313a9d5a8c2cbff7cb319c47c059557c38b7c44cad12c11ec" Jan 27 13:30:20 crc kubenswrapper[4786]: E0127 13:30:20.414509 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94522cebe735221313a9d5a8c2cbff7cb319c47c059557c38b7c44cad12c11ec\": container with ID starting with 94522cebe735221313a9d5a8c2cbff7cb319c47c059557c38b7c44cad12c11ec not found: ID does not exist" containerID="94522cebe735221313a9d5a8c2cbff7cb319c47c059557c38b7c44cad12c11ec" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.414593 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94522cebe735221313a9d5a8c2cbff7cb319c47c059557c38b7c44cad12c11ec"} err="failed to get container status \"94522cebe735221313a9d5a8c2cbff7cb319c47c059557c38b7c44cad12c11ec\": rpc error: code = NotFound desc = could not find container \"94522cebe735221313a9d5a8c2cbff7cb319c47c059557c38b7c44cad12c11ec\": container with ID starting with 94522cebe735221313a9d5a8c2cbff7cb319c47c059557c38b7c44cad12c11ec not found: ID does not exist" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.414656 4786 scope.go:117] "RemoveContainer" containerID="7f4fd365aecba422b826b1c3b354936e6344acf4208412fc44cec1fedf4d740e" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.419672 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.421499 4786 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.425935 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.427550 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.493333 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpbp4\" (UniqueName: \"kubernetes.io/projected/5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba-kube-api-access-fpbp4\") pod \"nova-kuttl-metadata-0\" (UID: \"5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.493394 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.493505 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/969b8667-4255-4a18-b097-c9c84b9432e9-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"969b8667-4255-4a18-b097-c9c84b9432e9\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.493682 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:20 
crc kubenswrapper[4786]: I0127 13:30:20.493751 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sbvs\" (UniqueName: \"kubernetes.io/projected/969b8667-4255-4a18-b097-c9c84b9432e9-kube-api-access-5sbvs\") pod \"nova-kuttl-scheduler-0\" (UID: \"969b8667-4255-4a18-b097-c9c84b9432e9\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.495962 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.507562 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.510535 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpbp4\" (UniqueName: \"kubernetes.io/projected/5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba-kube-api-access-fpbp4\") pod \"nova-kuttl-metadata-0\" (UID: \"5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.595168 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.596270 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sbvs\" (UniqueName: \"kubernetes.io/projected/969b8667-4255-4a18-b097-c9c84b9432e9-kube-api-access-5sbvs\") pod \"nova-kuttl-scheduler-0\" (UID: 
\"969b8667-4255-4a18-b097-c9c84b9432e9\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.598205 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/969b8667-4255-4a18-b097-c9c84b9432e9-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"969b8667-4255-4a18-b097-c9c84b9432e9\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.617398 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/969b8667-4255-4a18-b097-c9c84b9432e9-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"969b8667-4255-4a18-b097-c9c84b9432e9\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.623399 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sbvs\" (UniqueName: \"kubernetes.io/projected/969b8667-4255-4a18-b097-c9c84b9432e9-kube-api-access-5sbvs\") pod \"nova-kuttl-scheduler-0\" (UID: \"969b8667-4255-4a18-b097-c9c84b9432e9\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.624959 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.642749 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.646714 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.649240 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.651689 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.675972 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.700174 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln44s\" (UniqueName: \"kubernetes.io/projected/51ede085-aa04-4e76-a2b8-4b5a2fc0693a-kube-api-access-ln44s\") pod \"nova-kuttl-api-0\" (UID: \"51ede085-aa04-4e76-a2b8-4b5a2fc0693a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.700532 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51ede085-aa04-4e76-a2b8-4b5a2fc0693a-logs\") pod \"nova-kuttl-api-0\" (UID: \"51ede085-aa04-4e76-a2b8-4b5a2fc0693a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.700598 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51ede085-aa04-4e76-a2b8-4b5a2fc0693a-config-data\") pod \"nova-kuttl-api-0\" (UID: \"51ede085-aa04-4e76-a2b8-4b5a2fc0693a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.744715 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.802093 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln44s\" (UniqueName: \"kubernetes.io/projected/51ede085-aa04-4e76-a2b8-4b5a2fc0693a-kube-api-access-ln44s\") pod \"nova-kuttl-api-0\" (UID: \"51ede085-aa04-4e76-a2b8-4b5a2fc0693a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.802163 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51ede085-aa04-4e76-a2b8-4b5a2fc0693a-logs\") pod \"nova-kuttl-api-0\" (UID: \"51ede085-aa04-4e76-a2b8-4b5a2fc0693a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.802234 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51ede085-aa04-4e76-a2b8-4b5a2fc0693a-config-data\") pod \"nova-kuttl-api-0\" (UID: \"51ede085-aa04-4e76-a2b8-4b5a2fc0693a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.803898 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51ede085-aa04-4e76-a2b8-4b5a2fc0693a-logs\") pod \"nova-kuttl-api-0\" (UID: \"51ede085-aa04-4e76-a2b8-4b5a2fc0693a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.807509 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51ede085-aa04-4e76-a2b8-4b5a2fc0693a-config-data\") pod \"nova-kuttl-api-0\" (UID: \"51ede085-aa04-4e76-a2b8-4b5a2fc0693a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.818169 4786 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-ln44s\" (UniqueName: \"kubernetes.io/projected/51ede085-aa04-4e76-a2b8-4b5a2fc0693a-kube-api-access-ln44s\") pod \"nova-kuttl-api-0\" (UID: \"51ede085-aa04-4e76-a2b8-4b5a2fc0693a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:30:20 crc kubenswrapper[4786]: I0127 13:30:20.998006 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:30:21 crc kubenswrapper[4786]: I0127 13:30:21.145223 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:30:21 crc kubenswrapper[4786]: W0127 13:30:21.148555 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ddeddc7_cdfc_4a14_a27c_c55ad2bb3cba.slice/crio-91f83f7b61d69c6971a763226a49810bfaaacf879925800c5af4514c656cbad2 WatchSource:0}: Error finding container 91f83f7b61d69c6971a763226a49810bfaaacf879925800c5af4514c656cbad2: Status 404 returned error can't find the container with id 91f83f7b61d69c6971a763226a49810bfaaacf879925800c5af4514c656cbad2 Jan 27 13:30:21 crc kubenswrapper[4786]: I0127 13:30:21.203803 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:30:21 crc kubenswrapper[4786]: W0127 13:30:21.204768 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod969b8667_4255_4a18_b097_c9c84b9432e9.slice/crio-3cfaa1b25f7de11e3c245c80b8338ca1709593efc92fc7a72732c403342d5ab3 WatchSource:0}: Error finding container 3cfaa1b25f7de11e3c245c80b8338ca1709593efc92fc7a72732c403342d5ab3: Status 404 returned error can't find the container with id 3cfaa1b25f7de11e3c245c80b8338ca1709593efc92fc7a72732c403342d5ab3 Jan 27 13:30:21 crc kubenswrapper[4786]: I0127 13:30:21.279734 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"969b8667-4255-4a18-b097-c9c84b9432e9","Type":"ContainerStarted","Data":"3cfaa1b25f7de11e3c245c80b8338ca1709593efc92fc7a72732c403342d5ab3"} Jan 27 13:30:21 crc kubenswrapper[4786]: I0127 13:30:21.285934 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba","Type":"ContainerStarted","Data":"91f83f7b61d69c6971a763226a49810bfaaacf879925800c5af4514c656cbad2"} Jan 27 13:30:21 crc kubenswrapper[4786]: I0127 13:30:21.448680 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:30:21 crc kubenswrapper[4786]: W0127 13:30:21.456557 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51ede085_aa04_4e76_a2b8_4b5a2fc0693a.slice/crio-422501aa66e65c837f6cbad7f6ae036a553a7988f97b7f160b99dc710a07c7ab WatchSource:0}: Error finding container 422501aa66e65c837f6cbad7f6ae036a553a7988f97b7f160b99dc710a07c7ab: Status 404 returned error can't find the container with id 422501aa66e65c837f6cbad7f6ae036a553a7988f97b7f160b99dc710a07c7ab Jan 27 13:30:21 crc kubenswrapper[4786]: I0127 13:30:21.484492 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59a29346-eda2-48ed-9f24-e31d93aa0950" path="/var/lib/kubelet/pods/59a29346-eda2-48ed-9f24-e31d93aa0950/volumes" Jan 27 13:30:21 crc kubenswrapper[4786]: I0127 13:30:21.485322 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8b7d7ee-1a81-4742-9a11-0461b89bbce3" path="/var/lib/kubelet/pods/e8b7d7ee-1a81-4742-9a11-0461b89bbce3/volumes" Jan 27 13:30:21 crc kubenswrapper[4786]: I0127 13:30:21.485940 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8c9ecfc-d37c-4c95-9a5f-d12268c17204" path="/var/lib/kubelet/pods/e8c9ecfc-d37c-4c95-9a5f-d12268c17204/volumes" Jan 27 13:30:22 crc 
kubenswrapper[4786]: I0127 13:30:22.299368 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"51ede085-aa04-4e76-a2b8-4b5a2fc0693a","Type":"ContainerStarted","Data":"a890b66907f1463392b0503e72ebc14f478746788eb274bdef69cc1247f8deaa"} Jan 27 13:30:22 crc kubenswrapper[4786]: I0127 13:30:22.299634 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"51ede085-aa04-4e76-a2b8-4b5a2fc0693a","Type":"ContainerStarted","Data":"a4a8835705823cc40f843266750928f38b38024f64ded85eba41984aef19a5f4"} Jan 27 13:30:22 crc kubenswrapper[4786]: I0127 13:30:22.299647 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"51ede085-aa04-4e76-a2b8-4b5a2fc0693a","Type":"ContainerStarted","Data":"422501aa66e65c837f6cbad7f6ae036a553a7988f97b7f160b99dc710a07c7ab"} Jan 27 13:30:22 crc kubenswrapper[4786]: I0127 13:30:22.303905 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba","Type":"ContainerStarted","Data":"2f68326b1360c1344aa89c6d7a11d033ebfbbba51ba6d4e3258eb7d18476a824"} Jan 27 13:30:22 crc kubenswrapper[4786]: I0127 13:30:22.303961 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba","Type":"ContainerStarted","Data":"c063d76c91d49ede545dc9680e8f807ea0cb99198978f60633066e7dff9d0106"} Jan 27 13:30:22 crc kubenswrapper[4786]: I0127 13:30:22.305642 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"969b8667-4255-4a18-b097-c9c84b9432e9","Type":"ContainerStarted","Data":"678035256acf3f8452b2af8fad52934d809c945085f0fb4d9a1cfb6e6a47a6fc"} Jan 27 13:30:22 crc kubenswrapper[4786]: I0127 13:30:22.316224 4786 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.316207179 podStartE2EDuration="2.316207179s" podCreationTimestamp="2026-01-27 13:30:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:30:22.314148702 +0000 UTC m=+1405.524762831" watchObservedRunningTime="2026-01-27 13:30:22.316207179 +0000 UTC m=+1405.526821298" Jan 27 13:30:22 crc kubenswrapper[4786]: I0127 13:30:22.335256 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.33523886 podStartE2EDuration="2.33523886s" podCreationTimestamp="2026-01-27 13:30:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:30:22.330448139 +0000 UTC m=+1405.541062268" watchObservedRunningTime="2026-01-27 13:30:22.33523886 +0000 UTC m=+1405.545852979" Jan 27 13:30:22 crc kubenswrapper[4786]: I0127 13:30:22.351121 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.351103844 podStartE2EDuration="2.351103844s" podCreationTimestamp="2026-01-27 13:30:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:30:22.344044501 +0000 UTC m=+1405.554658630" watchObservedRunningTime="2026-01-27 13:30:22.351103844 +0000 UTC m=+1405.561717963" Jan 27 13:30:24 crc kubenswrapper[4786]: I0127 13:30:24.110701 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6kksk" podUID="3d75d515-6418-4134-84d6-ed12a8d75de8" containerName="registry-server" probeResult="failure" output=< Jan 27 13:30:24 crc kubenswrapper[4786]: timeout: failed to connect service ":50051" within 1s Jan 27 13:30:24 crc kubenswrapper[4786]: > 
Jan 27 13:30:25 crc kubenswrapper[4786]: I0127 13:30:25.677161 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:25 crc kubenswrapper[4786]: I0127 13:30:25.677547 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:25 crc kubenswrapper[4786]: I0127 13:30:25.745461 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:30:28 crc kubenswrapper[4786]: I0127 13:30:28.650353 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:30:29 crc kubenswrapper[4786]: I0127 13:30:29.063698 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-b9z5s"] Jan 27 13:30:29 crc kubenswrapper[4786]: I0127 13:30:29.064662 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-b9z5s" Jan 27 13:30:29 crc kubenswrapper[4786]: I0127 13:30:29.066875 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-config-data" Jan 27 13:30:29 crc kubenswrapper[4786]: I0127 13:30:29.067142 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-scripts" Jan 27 13:30:29 crc kubenswrapper[4786]: I0127 13:30:29.077002 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-b9z5s"] Jan 27 13:30:29 crc kubenswrapper[4786]: I0127 13:30:29.138422 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a87bac46-e96f-4630-a7a7-4399b8465ffc-scripts\") pod \"nova-kuttl-cell1-cell-mapping-b9z5s\" (UID: \"a87bac46-e96f-4630-a7a7-4399b8465ffc\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-b9z5s" Jan 27 13:30:29 crc kubenswrapper[4786]: I0127 13:30:29.138493 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftf76\" (UniqueName: \"kubernetes.io/projected/a87bac46-e96f-4630-a7a7-4399b8465ffc-kube-api-access-ftf76\") pod \"nova-kuttl-cell1-cell-mapping-b9z5s\" (UID: \"a87bac46-e96f-4630-a7a7-4399b8465ffc\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-b9z5s" Jan 27 13:30:29 crc kubenswrapper[4786]: I0127 13:30:29.138527 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a87bac46-e96f-4630-a7a7-4399b8465ffc-config-data\") pod \"nova-kuttl-cell1-cell-mapping-b9z5s\" (UID: \"a87bac46-e96f-4630-a7a7-4399b8465ffc\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-b9z5s" Jan 27 13:30:29 crc kubenswrapper[4786]: I0127 13:30:29.240398 4786 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a87bac46-e96f-4630-a7a7-4399b8465ffc-scripts\") pod \"nova-kuttl-cell1-cell-mapping-b9z5s\" (UID: \"a87bac46-e96f-4630-a7a7-4399b8465ffc\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-b9z5s" Jan 27 13:30:29 crc kubenswrapper[4786]: I0127 13:30:29.240492 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftf76\" (UniqueName: \"kubernetes.io/projected/a87bac46-e96f-4630-a7a7-4399b8465ffc-kube-api-access-ftf76\") pod \"nova-kuttl-cell1-cell-mapping-b9z5s\" (UID: \"a87bac46-e96f-4630-a7a7-4399b8465ffc\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-b9z5s" Jan 27 13:30:29 crc kubenswrapper[4786]: I0127 13:30:29.240521 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a87bac46-e96f-4630-a7a7-4399b8465ffc-config-data\") pod \"nova-kuttl-cell1-cell-mapping-b9z5s\" (UID: \"a87bac46-e96f-4630-a7a7-4399b8465ffc\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-b9z5s" Jan 27 13:30:29 crc kubenswrapper[4786]: I0127 13:30:29.246182 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a87bac46-e96f-4630-a7a7-4399b8465ffc-scripts\") pod \"nova-kuttl-cell1-cell-mapping-b9z5s\" (UID: \"a87bac46-e96f-4630-a7a7-4399b8465ffc\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-b9z5s" Jan 27 13:30:29 crc kubenswrapper[4786]: I0127 13:30:29.246287 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a87bac46-e96f-4630-a7a7-4399b8465ffc-config-data\") pod \"nova-kuttl-cell1-cell-mapping-b9z5s\" (UID: \"a87bac46-e96f-4630-a7a7-4399b8465ffc\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-b9z5s" Jan 27 13:30:29 crc kubenswrapper[4786]: I0127 
13:30:29.265360 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftf76\" (UniqueName: \"kubernetes.io/projected/a87bac46-e96f-4630-a7a7-4399b8465ffc-kube-api-access-ftf76\") pod \"nova-kuttl-cell1-cell-mapping-b9z5s\" (UID: \"a87bac46-e96f-4630-a7a7-4399b8465ffc\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-b9z5s" Jan 27 13:30:29 crc kubenswrapper[4786]: I0127 13:30:29.382794 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-b9z5s" Jan 27 13:30:29 crc kubenswrapper[4786]: I0127 13:30:29.835304 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-b9z5s"] Jan 27 13:30:29 crc kubenswrapper[4786]: W0127 13:30:29.839716 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda87bac46_e96f_4630_a7a7_4399b8465ffc.slice/crio-5ac5cf264ed6ec8a35b6a466bb4a6f85f0a8fa188cf56c4c2d692b5803409f8f WatchSource:0}: Error finding container 5ac5cf264ed6ec8a35b6a466bb4a6f85f0a8fa188cf56c4c2d692b5803409f8f: Status 404 returned error can't find the container with id 5ac5cf264ed6ec8a35b6a466bb4a6f85f0a8fa188cf56c4c2d692b5803409f8f Jan 27 13:30:30 crc kubenswrapper[4786]: I0127 13:30:30.363066 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-b9z5s" event={"ID":"a87bac46-e96f-4630-a7a7-4399b8465ffc","Type":"ContainerStarted","Data":"4603d55f7714e45336cb9d5909533599f47684e611d86a0704f656c68b418b35"} Jan 27 13:30:30 crc kubenswrapper[4786]: I0127 13:30:30.363307 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-b9z5s" event={"ID":"a87bac46-e96f-4630-a7a7-4399b8465ffc","Type":"ContainerStarted","Data":"5ac5cf264ed6ec8a35b6a466bb4a6f85f0a8fa188cf56c4c2d692b5803409f8f"} Jan 27 13:30:30 crc kubenswrapper[4786]: 
I0127 13:30:30.383487 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-b9z5s" podStartSLOduration=1.383471373 podStartE2EDuration="1.383471373s" podCreationTimestamp="2026-01-27 13:30:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:30:30.382575929 +0000 UTC m=+1413.593190048" watchObservedRunningTime="2026-01-27 13:30:30.383471373 +0000 UTC m=+1413.594085492" Jan 27 13:30:30 crc kubenswrapper[4786]: I0127 13:30:30.677226 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:30 crc kubenswrapper[4786]: I0127 13:30:30.677498 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:30 crc kubenswrapper[4786]: I0127 13:30:30.744950 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:30:30 crc kubenswrapper[4786]: I0127 13:30:30.770036 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:30:30 crc kubenswrapper[4786]: I0127 13:30:30.999340 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:30:30 crc kubenswrapper[4786]: I0127 13:30:30.999386 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:30:31 crc kubenswrapper[4786]: I0127 13:30:31.397138 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:30:31 crc kubenswrapper[4786]: I0127 13:30:31.760021 4786 prober.go:107] "Probe failed" probeType="Startup" 
pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.134:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 13:30:31 crc kubenswrapper[4786]: I0127 13:30:31.760064 4786 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.134:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 13:30:32 crc kubenswrapper[4786]: I0127 13:30:32.081974 4786 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="51ede085-aa04-4e76-a2b8-4b5a2fc0693a" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.136:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 13:30:32 crc kubenswrapper[4786]: I0127 13:30:32.082008 4786 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="51ede085-aa04-4e76-a2b8-4b5a2fc0693a" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.136:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 13:30:33 crc kubenswrapper[4786]: I0127 13:30:33.126714 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6kksk" Jan 27 13:30:33 crc kubenswrapper[4786]: I0127 13:30:33.179380 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6kksk" Jan 27 13:30:33 crc kubenswrapper[4786]: I0127 13:30:33.360433 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6kksk"] Jan 27 13:30:34 
crc kubenswrapper[4786]: I0127 13:30:34.392496 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6kksk" podUID="3d75d515-6418-4134-84d6-ed12a8d75de8" containerName="registry-server" containerID="cri-o://b10e2b4f38514e99a42a94b78670aa1c759c6e6385390185e255e7198c5f8703" gracePeriod=2 Jan 27 13:30:34 crc kubenswrapper[4786]: I0127 13:30:34.823119 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6kksk" Jan 27 13:30:34 crc kubenswrapper[4786]: I0127 13:30:34.930436 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6wf9\" (UniqueName: \"kubernetes.io/projected/3d75d515-6418-4134-84d6-ed12a8d75de8-kube-api-access-g6wf9\") pod \"3d75d515-6418-4134-84d6-ed12a8d75de8\" (UID: \"3d75d515-6418-4134-84d6-ed12a8d75de8\") " Jan 27 13:30:34 crc kubenswrapper[4786]: I0127 13:30:34.930522 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d75d515-6418-4134-84d6-ed12a8d75de8-catalog-content\") pod \"3d75d515-6418-4134-84d6-ed12a8d75de8\" (UID: \"3d75d515-6418-4134-84d6-ed12a8d75de8\") " Jan 27 13:30:34 crc kubenswrapper[4786]: I0127 13:30:34.930698 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d75d515-6418-4134-84d6-ed12a8d75de8-utilities\") pod \"3d75d515-6418-4134-84d6-ed12a8d75de8\" (UID: \"3d75d515-6418-4134-84d6-ed12a8d75de8\") " Jan 27 13:30:34 crc kubenswrapper[4786]: I0127 13:30:34.931511 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d75d515-6418-4134-84d6-ed12a8d75de8-utilities" (OuterVolumeSpecName: "utilities") pod "3d75d515-6418-4134-84d6-ed12a8d75de8" (UID: "3d75d515-6418-4134-84d6-ed12a8d75de8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:30:34 crc kubenswrapper[4786]: I0127 13:30:34.938042 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d75d515-6418-4134-84d6-ed12a8d75de8-kube-api-access-g6wf9" (OuterVolumeSpecName: "kube-api-access-g6wf9") pod "3d75d515-6418-4134-84d6-ed12a8d75de8" (UID: "3d75d515-6418-4134-84d6-ed12a8d75de8"). InnerVolumeSpecName "kube-api-access-g6wf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:30:35 crc kubenswrapper[4786]: I0127 13:30:35.030672 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d75d515-6418-4134-84d6-ed12a8d75de8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d75d515-6418-4134-84d6-ed12a8d75de8" (UID: "3d75d515-6418-4134-84d6-ed12a8d75de8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:30:35 crc kubenswrapper[4786]: I0127 13:30:35.032979 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d75d515-6418-4134-84d6-ed12a8d75de8-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 13:30:35 crc kubenswrapper[4786]: I0127 13:30:35.033049 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6wf9\" (UniqueName: \"kubernetes.io/projected/3d75d515-6418-4134-84d6-ed12a8d75de8-kube-api-access-g6wf9\") on node \"crc\" DevicePath \"\"" Jan 27 13:30:35 crc kubenswrapper[4786]: I0127 13:30:35.033067 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d75d515-6418-4134-84d6-ed12a8d75de8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 13:30:35 crc kubenswrapper[4786]: I0127 13:30:35.402329 4786 generic.go:334] "Generic (PLEG): container finished" podID="a87bac46-e96f-4630-a7a7-4399b8465ffc" 
containerID="4603d55f7714e45336cb9d5909533599f47684e611d86a0704f656c68b418b35" exitCode=0 Jan 27 13:30:35 crc kubenswrapper[4786]: I0127 13:30:35.402441 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-b9z5s" event={"ID":"a87bac46-e96f-4630-a7a7-4399b8465ffc","Type":"ContainerDied","Data":"4603d55f7714e45336cb9d5909533599f47684e611d86a0704f656c68b418b35"} Jan 27 13:30:35 crc kubenswrapper[4786]: I0127 13:30:35.406283 4786 generic.go:334] "Generic (PLEG): container finished" podID="3d75d515-6418-4134-84d6-ed12a8d75de8" containerID="b10e2b4f38514e99a42a94b78670aa1c759c6e6385390185e255e7198c5f8703" exitCode=0 Jan 27 13:30:35 crc kubenswrapper[4786]: I0127 13:30:35.406326 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6kksk" event={"ID":"3d75d515-6418-4134-84d6-ed12a8d75de8","Type":"ContainerDied","Data":"b10e2b4f38514e99a42a94b78670aa1c759c6e6385390185e255e7198c5f8703"} Jan 27 13:30:35 crc kubenswrapper[4786]: I0127 13:30:35.406338 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6kksk" Jan 27 13:30:35 crc kubenswrapper[4786]: I0127 13:30:35.406544 4786 scope.go:117] "RemoveContainer" containerID="b10e2b4f38514e99a42a94b78670aa1c759c6e6385390185e255e7198c5f8703" Jan 27 13:30:35 crc kubenswrapper[4786]: I0127 13:30:35.406446 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6kksk" event={"ID":"3d75d515-6418-4134-84d6-ed12a8d75de8","Type":"ContainerDied","Data":"dc168da1e078d7419a6477a85b690868805b6a15eccd01426782ea363a2bc39e"} Jan 27 13:30:35 crc kubenswrapper[4786]: I0127 13:30:35.427156 4786 scope.go:117] "RemoveContainer" containerID="75c3f54c283baeb08a22790c8ea493cee72c13a638093f699680e8a40b37cc1c" Jan 27 13:30:35 crc kubenswrapper[4786]: I0127 13:30:35.444050 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6kksk"] Jan 27 13:30:35 crc kubenswrapper[4786]: I0127 13:30:35.451254 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6kksk"] Jan 27 13:30:35 crc kubenswrapper[4786]: I0127 13:30:35.459135 4786 scope.go:117] "RemoveContainer" containerID="2684a5fe3ad9e6feb668b253a5391a7bfe47e7cf1c293928ec37fbbb7c9f95e1" Jan 27 13:30:35 crc kubenswrapper[4786]: I0127 13:30:35.489215 4786 scope.go:117] "RemoveContainer" containerID="b10e2b4f38514e99a42a94b78670aa1c759c6e6385390185e255e7198c5f8703" Jan 27 13:30:35 crc kubenswrapper[4786]: E0127 13:30:35.489943 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b10e2b4f38514e99a42a94b78670aa1c759c6e6385390185e255e7198c5f8703\": container with ID starting with b10e2b4f38514e99a42a94b78670aa1c759c6e6385390185e255e7198c5f8703 not found: ID does not exist" containerID="b10e2b4f38514e99a42a94b78670aa1c759c6e6385390185e255e7198c5f8703" Jan 27 13:30:35 crc kubenswrapper[4786]: I0127 13:30:35.489974 4786 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b10e2b4f38514e99a42a94b78670aa1c759c6e6385390185e255e7198c5f8703"} err="failed to get container status \"b10e2b4f38514e99a42a94b78670aa1c759c6e6385390185e255e7198c5f8703\": rpc error: code = NotFound desc = could not find container \"b10e2b4f38514e99a42a94b78670aa1c759c6e6385390185e255e7198c5f8703\": container with ID starting with b10e2b4f38514e99a42a94b78670aa1c759c6e6385390185e255e7198c5f8703 not found: ID does not exist" Jan 27 13:30:35 crc kubenswrapper[4786]: I0127 13:30:35.489997 4786 scope.go:117] "RemoveContainer" containerID="75c3f54c283baeb08a22790c8ea493cee72c13a638093f699680e8a40b37cc1c" Jan 27 13:30:35 crc kubenswrapper[4786]: E0127 13:30:35.491041 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75c3f54c283baeb08a22790c8ea493cee72c13a638093f699680e8a40b37cc1c\": container with ID starting with 75c3f54c283baeb08a22790c8ea493cee72c13a638093f699680e8a40b37cc1c not found: ID does not exist" containerID="75c3f54c283baeb08a22790c8ea493cee72c13a638093f699680e8a40b37cc1c" Jan 27 13:30:35 crc kubenswrapper[4786]: I0127 13:30:35.491080 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75c3f54c283baeb08a22790c8ea493cee72c13a638093f699680e8a40b37cc1c"} err="failed to get container status \"75c3f54c283baeb08a22790c8ea493cee72c13a638093f699680e8a40b37cc1c\": rpc error: code = NotFound desc = could not find container \"75c3f54c283baeb08a22790c8ea493cee72c13a638093f699680e8a40b37cc1c\": container with ID starting with 75c3f54c283baeb08a22790c8ea493cee72c13a638093f699680e8a40b37cc1c not found: ID does not exist" Jan 27 13:30:35 crc kubenswrapper[4786]: I0127 13:30:35.491106 4786 scope.go:117] "RemoveContainer" containerID="2684a5fe3ad9e6feb668b253a5391a7bfe47e7cf1c293928ec37fbbb7c9f95e1" Jan 27 13:30:35 crc kubenswrapper[4786]: E0127 
13:30:35.491722 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2684a5fe3ad9e6feb668b253a5391a7bfe47e7cf1c293928ec37fbbb7c9f95e1\": container with ID starting with 2684a5fe3ad9e6feb668b253a5391a7bfe47e7cf1c293928ec37fbbb7c9f95e1 not found: ID does not exist" containerID="2684a5fe3ad9e6feb668b253a5391a7bfe47e7cf1c293928ec37fbbb7c9f95e1" Jan 27 13:30:35 crc kubenswrapper[4786]: I0127 13:30:35.491751 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2684a5fe3ad9e6feb668b253a5391a7bfe47e7cf1c293928ec37fbbb7c9f95e1"} err="failed to get container status \"2684a5fe3ad9e6feb668b253a5391a7bfe47e7cf1c293928ec37fbbb7c9f95e1\": rpc error: code = NotFound desc = could not find container \"2684a5fe3ad9e6feb668b253a5391a7bfe47e7cf1c293928ec37fbbb7c9f95e1\": container with ID starting with 2684a5fe3ad9e6feb668b253a5391a7bfe47e7cf1c293928ec37fbbb7c9f95e1 not found: ID does not exist" Jan 27 13:30:35 crc kubenswrapper[4786]: I0127 13:30:35.491952 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d75d515-6418-4134-84d6-ed12a8d75de8" path="/var/lib/kubelet/pods/3d75d515-6418-4134-84d6-ed12a8d75de8/volumes" Jan 27 13:30:36 crc kubenswrapper[4786]: I0127 13:30:36.693184 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-b9z5s" Jan 27 13:30:36 crc kubenswrapper[4786]: I0127 13:30:36.759229 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftf76\" (UniqueName: \"kubernetes.io/projected/a87bac46-e96f-4630-a7a7-4399b8465ffc-kube-api-access-ftf76\") pod \"a87bac46-e96f-4630-a7a7-4399b8465ffc\" (UID: \"a87bac46-e96f-4630-a7a7-4399b8465ffc\") " Jan 27 13:30:36 crc kubenswrapper[4786]: I0127 13:30:36.759320 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a87bac46-e96f-4630-a7a7-4399b8465ffc-scripts\") pod \"a87bac46-e96f-4630-a7a7-4399b8465ffc\" (UID: \"a87bac46-e96f-4630-a7a7-4399b8465ffc\") " Jan 27 13:30:36 crc kubenswrapper[4786]: I0127 13:30:36.759559 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a87bac46-e96f-4630-a7a7-4399b8465ffc-config-data\") pod \"a87bac46-e96f-4630-a7a7-4399b8465ffc\" (UID: \"a87bac46-e96f-4630-a7a7-4399b8465ffc\") " Jan 27 13:30:36 crc kubenswrapper[4786]: I0127 13:30:36.763829 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a87bac46-e96f-4630-a7a7-4399b8465ffc-kube-api-access-ftf76" (OuterVolumeSpecName: "kube-api-access-ftf76") pod "a87bac46-e96f-4630-a7a7-4399b8465ffc" (UID: "a87bac46-e96f-4630-a7a7-4399b8465ffc"). InnerVolumeSpecName "kube-api-access-ftf76". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:30:36 crc kubenswrapper[4786]: I0127 13:30:36.763977 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a87bac46-e96f-4630-a7a7-4399b8465ffc-scripts" (OuterVolumeSpecName: "scripts") pod "a87bac46-e96f-4630-a7a7-4399b8465ffc" (UID: "a87bac46-e96f-4630-a7a7-4399b8465ffc"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:30:36 crc kubenswrapper[4786]: I0127 13:30:36.781989 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a87bac46-e96f-4630-a7a7-4399b8465ffc-config-data" (OuterVolumeSpecName: "config-data") pod "a87bac46-e96f-4630-a7a7-4399b8465ffc" (UID: "a87bac46-e96f-4630-a7a7-4399b8465ffc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:30:36 crc kubenswrapper[4786]: I0127 13:30:36.861029 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a87bac46-e96f-4630-a7a7-4399b8465ffc-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:30:36 crc kubenswrapper[4786]: I0127 13:30:36.861070 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftf76\" (UniqueName: \"kubernetes.io/projected/a87bac46-e96f-4630-a7a7-4399b8465ffc-kube-api-access-ftf76\") on node \"crc\" DevicePath \"\"" Jan 27 13:30:36 crc kubenswrapper[4786]: I0127 13:30:36.861083 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a87bac46-e96f-4630-a7a7-4399b8465ffc-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:30:37 crc kubenswrapper[4786]: I0127 13:30:37.424033 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-b9z5s" event={"ID":"a87bac46-e96f-4630-a7a7-4399b8465ffc","Type":"ContainerDied","Data":"5ac5cf264ed6ec8a35b6a466bb4a6f85f0a8fa188cf56c4c2d692b5803409f8f"} Jan 27 13:30:37 crc kubenswrapper[4786]: I0127 13:30:37.424597 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ac5cf264ed6ec8a35b6a466bb4a6f85f0a8fa188cf56c4c2d692b5803409f8f" Jan 27 13:30:37 crc kubenswrapper[4786]: I0127 13:30:37.424098 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-b9z5s" Jan 27 13:30:37 crc kubenswrapper[4786]: I0127 13:30:37.610371 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:30:37 crc kubenswrapper[4786]: I0127 13:30:37.610765 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="51ede085-aa04-4e76-a2b8-4b5a2fc0693a" containerName="nova-kuttl-api-log" containerID="cri-o://a4a8835705823cc40f843266750928f38b38024f64ded85eba41984aef19a5f4" gracePeriod=30 Jan 27 13:30:37 crc kubenswrapper[4786]: I0127 13:30:37.610882 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="51ede085-aa04-4e76-a2b8-4b5a2fc0693a" containerName="nova-kuttl-api-api" containerID="cri-o://a890b66907f1463392b0503e72ebc14f478746788eb274bdef69cc1247f8deaa" gracePeriod=30 Jan 27 13:30:37 crc kubenswrapper[4786]: I0127 13:30:37.686429 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:30:37 crc kubenswrapper[4786]: I0127 13:30:37.686722 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="969b8667-4255-4a18-b097-c9c84b9432e9" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://678035256acf3f8452b2af8fad52934d809c945085f0fb4d9a1cfb6e6a47a6fc" gracePeriod=30 Jan 27 13:30:37 crc kubenswrapper[4786]: I0127 13:30:37.703934 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:30:37 crc kubenswrapper[4786]: I0127 13:30:37.704228 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba" containerName="nova-kuttl-metadata-log" 
containerID="cri-o://c063d76c91d49ede545dc9680e8f807ea0cb99198978f60633066e7dff9d0106" gracePeriod=30 Jan 27 13:30:37 crc kubenswrapper[4786]: I0127 13:30:37.704372 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://2f68326b1360c1344aa89c6d7a11d033ebfbbba51ba6d4e3258eb7d18476a824" gracePeriod=30 Jan 27 13:30:38 crc kubenswrapper[4786]: I0127 13:30:38.433187 4786 generic.go:334] "Generic (PLEG): container finished" podID="51ede085-aa04-4e76-a2b8-4b5a2fc0693a" containerID="a4a8835705823cc40f843266750928f38b38024f64ded85eba41984aef19a5f4" exitCode=143 Jan 27 13:30:38 crc kubenswrapper[4786]: I0127 13:30:38.433449 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"51ede085-aa04-4e76-a2b8-4b5a2fc0693a","Type":"ContainerDied","Data":"a4a8835705823cc40f843266750928f38b38024f64ded85eba41984aef19a5f4"} Jan 27 13:30:38 crc kubenswrapper[4786]: I0127 13:30:38.434948 4786 generic.go:334] "Generic (PLEG): container finished" podID="5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba" containerID="c063d76c91d49ede545dc9680e8f807ea0cb99198978f60633066e7dff9d0106" exitCode=143 Jan 27 13:30:38 crc kubenswrapper[4786]: I0127 13:30:38.435031 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba","Type":"ContainerDied","Data":"c063d76c91d49ede545dc9680e8f807ea0cb99198978f60633066e7dff9d0106"} Jan 27 13:30:39 crc kubenswrapper[4786]: I0127 13:30:39.532806 4786 patch_prober.go:28] interesting pod/machine-config-daemon-7bxtk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:30:39 crc 
kubenswrapper[4786]: I0127 13:30:39.533536 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:30:39 crc kubenswrapper[4786]: I0127 13:30:39.533662 4786 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" Jan 27 13:30:39 crc kubenswrapper[4786]: I0127 13:30:39.534305 4786 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f3ed072ebc16e765b7905a1ff330ce7f66cb1fdc6d65470e94dc155e6c8bb633"} pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 13:30:39 crc kubenswrapper[4786]: I0127 13:30:39.534446 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" containerID="cri-o://f3ed072ebc16e765b7905a1ff330ce7f66cb1fdc6d65470e94dc155e6c8bb633" gracePeriod=600 Jan 27 13:30:40 crc kubenswrapper[4786]: I0127 13:30:40.453209 4786 generic.go:334] "Generic (PLEG): container finished" podID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerID="f3ed072ebc16e765b7905a1ff330ce7f66cb1fdc6d65470e94dc155e6c8bb633" exitCode=0 Jan 27 13:30:40 crc kubenswrapper[4786]: I0127 13:30:40.453272 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" event={"ID":"2c6a2646-52f7-41be-8a81-3fed6eac75cc","Type":"ContainerDied","Data":"f3ed072ebc16e765b7905a1ff330ce7f66cb1fdc6d65470e94dc155e6c8bb633"} 
Jan 27 13:30:40 crc kubenswrapper[4786]: I0127 13:30:40.453761 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" event={"ID":"2c6a2646-52f7-41be-8a81-3fed6eac75cc","Type":"ContainerStarted","Data":"2f94fef961bba31453718ee505c65aba4d5e2d46c581dccdf2daaebbe3a39906"} Jan 27 13:30:40 crc kubenswrapper[4786]: I0127 13:30:40.453776 4786 scope.go:117] "RemoveContainer" containerID="7a272494933f607d1d0ff23a4dfbd30c05a19b1fad6cb442bff9296b566a9151" Jan 27 13:30:40 crc kubenswrapper[4786]: E0127 13:30:40.748331 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="678035256acf3f8452b2af8fad52934d809c945085f0fb4d9a1cfb6e6a47a6fc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 13:30:40 crc kubenswrapper[4786]: E0127 13:30:40.749569 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="678035256acf3f8452b2af8fad52934d809c945085f0fb4d9a1cfb6e6a47a6fc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 13:30:40 crc kubenswrapper[4786]: E0127 13:30:40.750653 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="678035256acf3f8452b2af8fad52934d809c945085f0fb4d9a1cfb6e6a47a6fc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 13:30:40 crc kubenswrapper[4786]: E0127 13:30:40.750717 4786 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" 
pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="969b8667-4255-4a18-b097-c9c84b9432e9" containerName="nova-kuttl-scheduler-scheduler" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.248761 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.329943 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51ede085-aa04-4e76-a2b8-4b5a2fc0693a-logs\") pod \"51ede085-aa04-4e76-a2b8-4b5a2fc0693a\" (UID: \"51ede085-aa04-4e76-a2b8-4b5a2fc0693a\") " Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.330005 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51ede085-aa04-4e76-a2b8-4b5a2fc0693a-config-data\") pod \"51ede085-aa04-4e76-a2b8-4b5a2fc0693a\" (UID: \"51ede085-aa04-4e76-a2b8-4b5a2fc0693a\") " Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.330056 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ln44s\" (UniqueName: \"kubernetes.io/projected/51ede085-aa04-4e76-a2b8-4b5a2fc0693a-kube-api-access-ln44s\") pod \"51ede085-aa04-4e76-a2b8-4b5a2fc0693a\" (UID: \"51ede085-aa04-4e76-a2b8-4b5a2fc0693a\") " Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.332258 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51ede085-aa04-4e76-a2b8-4b5a2fc0693a-logs" (OuterVolumeSpecName: "logs") pod "51ede085-aa04-4e76-a2b8-4b5a2fc0693a" (UID: "51ede085-aa04-4e76-a2b8-4b5a2fc0693a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.342901 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51ede085-aa04-4e76-a2b8-4b5a2fc0693a-kube-api-access-ln44s" (OuterVolumeSpecName: "kube-api-access-ln44s") pod "51ede085-aa04-4e76-a2b8-4b5a2fc0693a" (UID: "51ede085-aa04-4e76-a2b8-4b5a2fc0693a"). InnerVolumeSpecName "kube-api-access-ln44s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.363644 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51ede085-aa04-4e76-a2b8-4b5a2fc0693a-config-data" (OuterVolumeSpecName: "config-data") pod "51ede085-aa04-4e76-a2b8-4b5a2fc0693a" (UID: "51ede085-aa04-4e76-a2b8-4b5a2fc0693a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.386424 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.431838 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba-logs\") pod \"5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba\" (UID: \"5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba\") " Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.431896 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpbp4\" (UniqueName: \"kubernetes.io/projected/5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba-kube-api-access-fpbp4\") pod \"5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba\" (UID: \"5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba\") " Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.431976 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba-config-data\") pod \"5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba\" (UID: \"5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba\") " Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.434346 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51ede085-aa04-4e76-a2b8-4b5a2fc0693a-logs\") on node \"crc\" DevicePath \"\"" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.434381 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51ede085-aa04-4e76-a2b8-4b5a2fc0693a-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.434396 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ln44s\" (UniqueName: \"kubernetes.io/projected/51ede085-aa04-4e76-a2b8-4b5a2fc0693a-kube-api-access-ln44s\") on node \"crc\" DevicePath \"\"" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.434787 4786 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba-logs" (OuterVolumeSpecName: "logs") pod "5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba" (UID: "5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.437566 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba-kube-api-access-fpbp4" (OuterVolumeSpecName: "kube-api-access-fpbp4") pod "5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba" (UID: "5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba"). InnerVolumeSpecName "kube-api-access-fpbp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.454300 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba-config-data" (OuterVolumeSpecName: "config-data") pod "5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba" (UID: "5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.463525 4786 generic.go:334] "Generic (PLEG): container finished" podID="5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba" containerID="2f68326b1360c1344aa89c6d7a11d033ebfbbba51ba6d4e3258eb7d18476a824" exitCode=0 Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.463642 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.463658 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba","Type":"ContainerDied","Data":"2f68326b1360c1344aa89c6d7a11d033ebfbbba51ba6d4e3258eb7d18476a824"} Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.463830 4786 scope.go:117] "RemoveContainer" containerID="2f68326b1360c1344aa89c6d7a11d033ebfbbba51ba6d4e3258eb7d18476a824" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.473078 4786 generic.go:334] "Generic (PLEG): container finished" podID="51ede085-aa04-4e76-a2b8-4b5a2fc0693a" containerID="a890b66907f1463392b0503e72ebc14f478746788eb274bdef69cc1247f8deaa" exitCode=0 Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.473161 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.474979 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba","Type":"ContainerDied","Data":"91f83f7b61d69c6971a763226a49810bfaaacf879925800c5af4514c656cbad2"} Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.475020 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"51ede085-aa04-4e76-a2b8-4b5a2fc0693a","Type":"ContainerDied","Data":"a890b66907f1463392b0503e72ebc14f478746788eb274bdef69cc1247f8deaa"} Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.475036 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"51ede085-aa04-4e76-a2b8-4b5a2fc0693a","Type":"ContainerDied","Data":"422501aa66e65c837f6cbad7f6ae036a553a7988f97b7f160b99dc710a07c7ab"} Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 
13:30:41.497225 4786 scope.go:117] "RemoveContainer" containerID="c063d76c91d49ede545dc9680e8f807ea0cb99198978f60633066e7dff9d0106" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.512084 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.537886 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba-logs\") on node \"crc\" DevicePath \"\"" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.537919 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpbp4\" (UniqueName: \"kubernetes.io/projected/5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba-kube-api-access-fpbp4\") on node \"crc\" DevicePath \"\"" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.537930 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.543264 4786 scope.go:117] "RemoveContainer" containerID="2f68326b1360c1344aa89c6d7a11d033ebfbbba51ba6d4e3258eb7d18476a824" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.543439 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:30:41 crc kubenswrapper[4786]: E0127 13:30:41.551235 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f68326b1360c1344aa89c6d7a11d033ebfbbba51ba6d4e3258eb7d18476a824\": container with ID starting with 2f68326b1360c1344aa89c6d7a11d033ebfbbba51ba6d4e3258eb7d18476a824 not found: ID does not exist" containerID="2f68326b1360c1344aa89c6d7a11d033ebfbbba51ba6d4e3258eb7d18476a824" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.551291 4786 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f68326b1360c1344aa89c6d7a11d033ebfbbba51ba6d4e3258eb7d18476a824"} err="failed to get container status \"2f68326b1360c1344aa89c6d7a11d033ebfbbba51ba6d4e3258eb7d18476a824\": rpc error: code = NotFound desc = could not find container \"2f68326b1360c1344aa89c6d7a11d033ebfbbba51ba6d4e3258eb7d18476a824\": container with ID starting with 2f68326b1360c1344aa89c6d7a11d033ebfbbba51ba6d4e3258eb7d18476a824 not found: ID does not exist" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.551315 4786 scope.go:117] "RemoveContainer" containerID="c063d76c91d49ede545dc9680e8f807ea0cb99198978f60633066e7dff9d0106" Jan 27 13:30:41 crc kubenswrapper[4786]: E0127 13:30:41.553513 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c063d76c91d49ede545dc9680e8f807ea0cb99198978f60633066e7dff9d0106\": container with ID starting with c063d76c91d49ede545dc9680e8f807ea0cb99198978f60633066e7dff9d0106 not found: ID does not exist" containerID="c063d76c91d49ede545dc9680e8f807ea0cb99198978f60633066e7dff9d0106" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.553561 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c063d76c91d49ede545dc9680e8f807ea0cb99198978f60633066e7dff9d0106"} err="failed to get container status \"c063d76c91d49ede545dc9680e8f807ea0cb99198978f60633066e7dff9d0106\": rpc error: code = NotFound desc = could not find container \"c063d76c91d49ede545dc9680e8f807ea0cb99198978f60633066e7dff9d0106\": container with ID starting with c063d76c91d49ede545dc9680e8f807ea0cb99198978f60633066e7dff9d0106 not found: ID does not exist" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.553586 4786 scope.go:117] "RemoveContainer" containerID="a890b66907f1463392b0503e72ebc14f478746788eb274bdef69cc1247f8deaa" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.564098 4786 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:30:41 crc kubenswrapper[4786]: E0127 13:30:41.564758 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51ede085-aa04-4e76-a2b8-4b5a2fc0693a" containerName="nova-kuttl-api-log" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.564777 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="51ede085-aa04-4e76-a2b8-4b5a2fc0693a" containerName="nova-kuttl-api-log" Jan 27 13:30:41 crc kubenswrapper[4786]: E0127 13:30:41.564792 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d75d515-6418-4134-84d6-ed12a8d75de8" containerName="extract-content" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.564798 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d75d515-6418-4134-84d6-ed12a8d75de8" containerName="extract-content" Jan 27 13:30:41 crc kubenswrapper[4786]: E0127 13:30:41.564812 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba" containerName="nova-kuttl-metadata-metadata" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.564819 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba" containerName="nova-kuttl-metadata-metadata" Jan 27 13:30:41 crc kubenswrapper[4786]: E0127 13:30:41.564832 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba" containerName="nova-kuttl-metadata-log" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.564838 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba" containerName="nova-kuttl-metadata-log" Jan 27 13:30:41 crc kubenswrapper[4786]: E0127 13:30:41.564850 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d75d515-6418-4134-84d6-ed12a8d75de8" containerName="extract-utilities" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.564856 4786 
state_mem.go:107] "Deleted CPUSet assignment" podUID="3d75d515-6418-4134-84d6-ed12a8d75de8" containerName="extract-utilities" Jan 27 13:30:41 crc kubenswrapper[4786]: E0127 13:30:41.564866 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d75d515-6418-4134-84d6-ed12a8d75de8" containerName="registry-server" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.564872 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d75d515-6418-4134-84d6-ed12a8d75de8" containerName="registry-server" Jan 27 13:30:41 crc kubenswrapper[4786]: E0127 13:30:41.564887 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51ede085-aa04-4e76-a2b8-4b5a2fc0693a" containerName="nova-kuttl-api-api" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.564893 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="51ede085-aa04-4e76-a2b8-4b5a2fc0693a" containerName="nova-kuttl-api-api" Jan 27 13:30:41 crc kubenswrapper[4786]: E0127 13:30:41.564923 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a87bac46-e96f-4630-a7a7-4399b8465ffc" containerName="nova-manage" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.564931 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="a87bac46-e96f-4630-a7a7-4399b8465ffc" containerName="nova-manage" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.565153 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba" containerName="nova-kuttl-metadata-log" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.565171 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="51ede085-aa04-4e76-a2b8-4b5a2fc0693a" containerName="nova-kuttl-api-log" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.565178 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="a87bac46-e96f-4630-a7a7-4399b8465ffc" containerName="nova-manage" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.565188 4786 
memory_manager.go:354] "RemoveStaleState removing state" podUID="5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba" containerName="nova-kuttl-metadata-metadata" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.565197 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d75d515-6418-4134-84d6-ed12a8d75de8" containerName="registry-server" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.565206 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="51ede085-aa04-4e76-a2b8-4b5a2fc0693a" containerName="nova-kuttl-api-api" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.566546 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.569694 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.579325 4786 scope.go:117] "RemoveContainer" containerID="a4a8835705823cc40f843266750928f38b38024f64ded85eba41984aef19a5f4" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.581096 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.591878 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.603296 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.614941 4786 scope.go:117] "RemoveContainer" containerID="a890b66907f1463392b0503e72ebc14f478746788eb274bdef69cc1247f8deaa" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.615183 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:30:41 crc kubenswrapper[4786]: E0127 13:30:41.617222 4786 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a890b66907f1463392b0503e72ebc14f478746788eb274bdef69cc1247f8deaa\": container with ID starting with a890b66907f1463392b0503e72ebc14f478746788eb274bdef69cc1247f8deaa not found: ID does not exist" containerID="a890b66907f1463392b0503e72ebc14f478746788eb274bdef69cc1247f8deaa" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.617273 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a890b66907f1463392b0503e72ebc14f478746788eb274bdef69cc1247f8deaa"} err="failed to get container status \"a890b66907f1463392b0503e72ebc14f478746788eb274bdef69cc1247f8deaa\": rpc error: code = NotFound desc = could not find container \"a890b66907f1463392b0503e72ebc14f478746788eb274bdef69cc1247f8deaa\": container with ID starting with a890b66907f1463392b0503e72ebc14f478746788eb274bdef69cc1247f8deaa not found: ID does not exist" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.617310 4786 scope.go:117] "RemoveContainer" containerID="a4a8835705823cc40f843266750928f38b38024f64ded85eba41984aef19a5f4" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.617327 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:30:41 crc kubenswrapper[4786]: E0127 13:30:41.619072 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4a8835705823cc40f843266750928f38b38024f64ded85eba41984aef19a5f4\": container with ID starting with a4a8835705823cc40f843266750928f38b38024f64ded85eba41984aef19a5f4 not found: ID does not exist" containerID="a4a8835705823cc40f843266750928f38b38024f64ded85eba41984aef19a5f4" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.619112 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4a8835705823cc40f843266750928f38b38024f64ded85eba41984aef19a5f4"} err="failed to get container status \"a4a8835705823cc40f843266750928f38b38024f64ded85eba41984aef19a5f4\": rpc error: code = NotFound desc = could not find container \"a4a8835705823cc40f843266750928f38b38024f64ded85eba41984aef19a5f4\": container with ID starting with a4a8835705823cc40f843266750928f38b38024f64ded85eba41984aef19a5f4 not found: ID does not exist" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.619287 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.626370 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.641331 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9gx5\" (UniqueName: \"kubernetes.io/projected/701d1213-1604-4347-989c-98f88c361f06-kube-api-access-h9gx5\") pod \"nova-kuttl-api-0\" (UID: \"701d1213-1604-4347-989c-98f88c361f06\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.641397 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4537cb47-5182-4574-bbe4-9edd9509e0f1-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"4537cb47-5182-4574-bbe4-9edd9509e0f1\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.641433 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4537cb47-5182-4574-bbe4-9edd9509e0f1-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"4537cb47-5182-4574-bbe4-9edd9509e0f1\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.641468 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dlhl\" (UniqueName: \"kubernetes.io/projected/4537cb47-5182-4574-bbe4-9edd9509e0f1-kube-api-access-6dlhl\") pod \"nova-kuttl-metadata-0\" (UID: \"4537cb47-5182-4574-bbe4-9edd9509e0f1\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.641493 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/701d1213-1604-4347-989c-98f88c361f06-config-data\") pod \"nova-kuttl-api-0\" (UID: \"701d1213-1604-4347-989c-98f88c361f06\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.641570 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/701d1213-1604-4347-989c-98f88c361f06-logs\") pod \"nova-kuttl-api-0\" (UID: \"701d1213-1604-4347-989c-98f88c361f06\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.742924 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/701d1213-1604-4347-989c-98f88c361f06-config-data\") pod \"nova-kuttl-api-0\" (UID: \"701d1213-1604-4347-989c-98f88c361f06\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.742997 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/701d1213-1604-4347-989c-98f88c361f06-logs\") pod \"nova-kuttl-api-0\" (UID: \"701d1213-1604-4347-989c-98f88c361f06\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.743084 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9gx5\" (UniqueName: \"kubernetes.io/projected/701d1213-1604-4347-989c-98f88c361f06-kube-api-access-h9gx5\") pod \"nova-kuttl-api-0\" (UID: \"701d1213-1604-4347-989c-98f88c361f06\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.743123 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4537cb47-5182-4574-bbe4-9edd9509e0f1-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"4537cb47-5182-4574-bbe4-9edd9509e0f1\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.743161 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4537cb47-5182-4574-bbe4-9edd9509e0f1-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"4537cb47-5182-4574-bbe4-9edd9509e0f1\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.743196 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dlhl\" (UniqueName: \"kubernetes.io/projected/4537cb47-5182-4574-bbe4-9edd9509e0f1-kube-api-access-6dlhl\") pod 
\"nova-kuttl-metadata-0\" (UID: \"4537cb47-5182-4574-bbe4-9edd9509e0f1\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.743862 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/701d1213-1604-4347-989c-98f88c361f06-logs\") pod \"nova-kuttl-api-0\" (UID: \"701d1213-1604-4347-989c-98f88c361f06\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.745687 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4537cb47-5182-4574-bbe4-9edd9509e0f1-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"4537cb47-5182-4574-bbe4-9edd9509e0f1\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.747728 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4537cb47-5182-4574-bbe4-9edd9509e0f1-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"4537cb47-5182-4574-bbe4-9edd9509e0f1\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.748415 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/701d1213-1604-4347-989c-98f88c361f06-config-data\") pod \"nova-kuttl-api-0\" (UID: \"701d1213-1604-4347-989c-98f88c361f06\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.762864 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9gx5\" (UniqueName: \"kubernetes.io/projected/701d1213-1604-4347-989c-98f88c361f06-kube-api-access-h9gx5\") pod \"nova-kuttl-api-0\" (UID: \"701d1213-1604-4347-989c-98f88c361f06\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.763468 4786 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dlhl\" (UniqueName: \"kubernetes.io/projected/4537cb47-5182-4574-bbe4-9edd9509e0f1-kube-api-access-6dlhl\") pod \"nova-kuttl-metadata-0\" (UID: \"4537cb47-5182-4574-bbe4-9edd9509e0f1\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.890073 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:41 crc kubenswrapper[4786]: I0127 13:30:41.945010 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:30:42 crc kubenswrapper[4786]: I0127 13:30:42.332533 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:30:42 crc kubenswrapper[4786]: W0127 13:30:42.341974 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4537cb47_5182_4574_bbe4_9edd9509e0f1.slice/crio-80809f989abfd81ca75e05148dff0e94247e51b2d8e005a5faeacd7b35322304 WatchSource:0}: Error finding container 80809f989abfd81ca75e05148dff0e94247e51b2d8e005a5faeacd7b35322304: Status 404 returned error can't find the container with id 80809f989abfd81ca75e05148dff0e94247e51b2d8e005a5faeacd7b35322304 Jan 27 13:30:42 crc kubenswrapper[4786]: I0127 13:30:42.405877 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:30:42 crc kubenswrapper[4786]: W0127 13:30:42.412476 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod701d1213_1604_4347_989c_98f88c361f06.slice/crio-ee83a809e742eaa5af3d2bfa6d48bd7f885e5501b52e20e3c75a370945ff704d WatchSource:0}: Error finding container ee83a809e742eaa5af3d2bfa6d48bd7f885e5501b52e20e3c75a370945ff704d: Status 404 returned error 
can't find the container with id ee83a809e742eaa5af3d2bfa6d48bd7f885e5501b52e20e3c75a370945ff704d Jan 27 13:30:42 crc kubenswrapper[4786]: I0127 13:30:42.509360 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"4537cb47-5182-4574-bbe4-9edd9509e0f1","Type":"ContainerStarted","Data":"0d9b44fda9c1496d9313d6057d76ac8ca39fa42e074d21359c894a47da5099f3"} Jan 27 13:30:42 crc kubenswrapper[4786]: I0127 13:30:42.509411 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"4537cb47-5182-4574-bbe4-9edd9509e0f1","Type":"ContainerStarted","Data":"80809f989abfd81ca75e05148dff0e94247e51b2d8e005a5faeacd7b35322304"} Jan 27 13:30:42 crc kubenswrapper[4786]: I0127 13:30:42.517762 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"701d1213-1604-4347-989c-98f88c361f06","Type":"ContainerStarted","Data":"ee83a809e742eaa5af3d2bfa6d48bd7f885e5501b52e20e3c75a370945ff704d"} Jan 27 13:30:43 crc kubenswrapper[4786]: I0127 13:30:43.475954 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51ede085-aa04-4e76-a2b8-4b5a2fc0693a" path="/var/lib/kubelet/pods/51ede085-aa04-4e76-a2b8-4b5a2fc0693a/volumes" Jan 27 13:30:43 crc kubenswrapper[4786]: I0127 13:30:43.476827 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba" path="/var/lib/kubelet/pods/5ddeddc7-cdfc-4a14-a27c-c55ad2bb3cba/volumes" Jan 27 13:30:43 crc kubenswrapper[4786]: I0127 13:30:43.547413 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"4537cb47-5182-4574-bbe4-9edd9509e0f1","Type":"ContainerStarted","Data":"eb7411b1313a93f59fb8e199a0312ac5410432a977b02983ca5d247f155bf918"} Jan 27 13:30:43 crc kubenswrapper[4786]: I0127 13:30:43.549126 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"701d1213-1604-4347-989c-98f88c361f06","Type":"ContainerStarted","Data":"c71820b87588674a4174fdfffbf504e50348107b3351d25b5ae13306ac5ea846"} Jan 27 13:30:43 crc kubenswrapper[4786]: I0127 13:30:43.551097 4786 generic.go:334] "Generic (PLEG): container finished" podID="969b8667-4255-4a18-b097-c9c84b9432e9" containerID="678035256acf3f8452b2af8fad52934d809c945085f0fb4d9a1cfb6e6a47a6fc" exitCode=0 Jan 27 13:30:43 crc kubenswrapper[4786]: I0127 13:30:43.551126 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"969b8667-4255-4a18-b097-c9c84b9432e9","Type":"ContainerDied","Data":"678035256acf3f8452b2af8fad52934d809c945085f0fb4d9a1cfb6e6a47a6fc"} Jan 27 13:30:43 crc kubenswrapper[4786]: I0127 13:30:43.566556 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.566533946 podStartE2EDuration="2.566533946s" podCreationTimestamp="2026-01-27 13:30:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:30:43.564487571 +0000 UTC m=+1426.775101700" watchObservedRunningTime="2026-01-27 13:30:43.566533946 +0000 UTC m=+1426.777148065" Jan 27 13:30:43 crc kubenswrapper[4786]: I0127 13:30:43.998132 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:30:44 crc kubenswrapper[4786]: I0127 13:30:44.094509 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/969b8667-4255-4a18-b097-c9c84b9432e9-config-data\") pod \"969b8667-4255-4a18-b097-c9c84b9432e9\" (UID: \"969b8667-4255-4a18-b097-c9c84b9432e9\") " Jan 27 13:30:44 crc kubenswrapper[4786]: I0127 13:30:44.094593 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sbvs\" (UniqueName: \"kubernetes.io/projected/969b8667-4255-4a18-b097-c9c84b9432e9-kube-api-access-5sbvs\") pod \"969b8667-4255-4a18-b097-c9c84b9432e9\" (UID: \"969b8667-4255-4a18-b097-c9c84b9432e9\") " Jan 27 13:30:44 crc kubenswrapper[4786]: I0127 13:30:44.099223 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/969b8667-4255-4a18-b097-c9c84b9432e9-kube-api-access-5sbvs" (OuterVolumeSpecName: "kube-api-access-5sbvs") pod "969b8667-4255-4a18-b097-c9c84b9432e9" (UID: "969b8667-4255-4a18-b097-c9c84b9432e9"). InnerVolumeSpecName "kube-api-access-5sbvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:30:44 crc kubenswrapper[4786]: I0127 13:30:44.115557 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/969b8667-4255-4a18-b097-c9c84b9432e9-config-data" (OuterVolumeSpecName: "config-data") pod "969b8667-4255-4a18-b097-c9c84b9432e9" (UID: "969b8667-4255-4a18-b097-c9c84b9432e9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:30:44 crc kubenswrapper[4786]: I0127 13:30:44.196557 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/969b8667-4255-4a18-b097-c9c84b9432e9-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:30:44 crc kubenswrapper[4786]: I0127 13:30:44.196625 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5sbvs\" (UniqueName: \"kubernetes.io/projected/969b8667-4255-4a18-b097-c9c84b9432e9-kube-api-access-5sbvs\") on node \"crc\" DevicePath \"\"" Jan 27 13:30:44 crc kubenswrapper[4786]: I0127 13:30:44.559954 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:30:44 crc kubenswrapper[4786]: I0127 13:30:44.559951 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"969b8667-4255-4a18-b097-c9c84b9432e9","Type":"ContainerDied","Data":"3cfaa1b25f7de11e3c245c80b8338ca1709593efc92fc7a72732c403342d5ab3"} Jan 27 13:30:44 crc kubenswrapper[4786]: I0127 13:30:44.560355 4786 scope.go:117] "RemoveContainer" containerID="678035256acf3f8452b2af8fad52934d809c945085f0fb4d9a1cfb6e6a47a6fc" Jan 27 13:30:44 crc kubenswrapper[4786]: I0127 13:30:44.562432 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"701d1213-1604-4347-989c-98f88c361f06","Type":"ContainerStarted","Data":"1788d372258353cb8d6de86fa631004f04f7e95eda7ee4fd5b7ecf8bb3bb6ef3"} Jan 27 13:30:44 crc kubenswrapper[4786]: I0127 13:30:44.591201 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=3.591141033 podStartE2EDuration="3.591141033s" podCreationTimestamp="2026-01-27 13:30:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-27 13:30:44.579476354 +0000 UTC m=+1427.790090483" watchObservedRunningTime="2026-01-27 13:30:44.591141033 +0000 UTC m=+1427.801755152" Jan 27 13:30:44 crc kubenswrapper[4786]: I0127 13:30:44.608188 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:30:44 crc kubenswrapper[4786]: I0127 13:30:44.617138 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:30:44 crc kubenswrapper[4786]: I0127 13:30:44.624091 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:30:44 crc kubenswrapper[4786]: E0127 13:30:44.624514 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="969b8667-4255-4a18-b097-c9c84b9432e9" containerName="nova-kuttl-scheduler-scheduler" Jan 27 13:30:44 crc kubenswrapper[4786]: I0127 13:30:44.624533 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="969b8667-4255-4a18-b097-c9c84b9432e9" containerName="nova-kuttl-scheduler-scheduler" Jan 27 13:30:44 crc kubenswrapper[4786]: I0127 13:30:44.624717 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="969b8667-4255-4a18-b097-c9c84b9432e9" containerName="nova-kuttl-scheduler-scheduler" Jan 27 13:30:44 crc kubenswrapper[4786]: I0127 13:30:44.625305 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:30:44 crc kubenswrapper[4786]: I0127 13:30:44.630528 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:30:44 crc kubenswrapper[4786]: I0127 13:30:44.632045 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 27 13:30:44 crc kubenswrapper[4786]: I0127 13:30:44.705330 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rlfq\" (UniqueName: \"kubernetes.io/projected/71ce96de-64df-4cfa-a871-380544ed87d2-kube-api-access-7rlfq\") pod \"nova-kuttl-scheduler-0\" (UID: \"71ce96de-64df-4cfa-a871-380544ed87d2\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:30:44 crc kubenswrapper[4786]: I0127 13:30:44.705512 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71ce96de-64df-4cfa-a871-380544ed87d2-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"71ce96de-64df-4cfa-a871-380544ed87d2\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:30:44 crc kubenswrapper[4786]: I0127 13:30:44.807513 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71ce96de-64df-4cfa-a871-380544ed87d2-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"71ce96de-64df-4cfa-a871-380544ed87d2\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:30:44 crc kubenswrapper[4786]: I0127 13:30:44.807581 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rlfq\" (UniqueName: \"kubernetes.io/projected/71ce96de-64df-4cfa-a871-380544ed87d2-kube-api-access-7rlfq\") pod \"nova-kuttl-scheduler-0\" (UID: \"71ce96de-64df-4cfa-a871-380544ed87d2\") " 
pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:30:44 crc kubenswrapper[4786]: I0127 13:30:44.820168 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71ce96de-64df-4cfa-a871-380544ed87d2-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"71ce96de-64df-4cfa-a871-380544ed87d2\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:30:44 crc kubenswrapper[4786]: I0127 13:30:44.828290 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rlfq\" (UniqueName: \"kubernetes.io/projected/71ce96de-64df-4cfa-a871-380544ed87d2-kube-api-access-7rlfq\") pod \"nova-kuttl-scheduler-0\" (UID: \"71ce96de-64df-4cfa-a871-380544ed87d2\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:30:44 crc kubenswrapper[4786]: I0127 13:30:44.949056 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:30:45 crc kubenswrapper[4786]: I0127 13:30:45.359158 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:30:45 crc kubenswrapper[4786]: W0127 13:30:45.365916 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71ce96de_64df_4cfa_a871_380544ed87d2.slice/crio-e274845ff34680e00bc9e51649b063fdb7425ea6d6f679ee935fe436bc786784 WatchSource:0}: Error finding container e274845ff34680e00bc9e51649b063fdb7425ea6d6f679ee935fe436bc786784: Status 404 returned error can't find the container with id e274845ff34680e00bc9e51649b063fdb7425ea6d6f679ee935fe436bc786784 Jan 27 13:30:45 crc kubenswrapper[4786]: I0127 13:30:45.477522 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="969b8667-4255-4a18-b097-c9c84b9432e9" path="/var/lib/kubelet/pods/969b8667-4255-4a18-b097-c9c84b9432e9/volumes" Jan 27 13:30:45 crc kubenswrapper[4786]: 
I0127 13:30:45.569958 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"71ce96de-64df-4cfa-a871-380544ed87d2","Type":"ContainerStarted","Data":"e274845ff34680e00bc9e51649b063fdb7425ea6d6f679ee935fe436bc786784"} Jan 27 13:30:46 crc kubenswrapper[4786]: I0127 13:30:46.584088 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"71ce96de-64df-4cfa-a871-380544ed87d2","Type":"ContainerStarted","Data":"e02f04eb115ae0b715b6adc44ce77d01655d3d4f256a219b968212af9aa219e3"} Jan 27 13:30:46 crc kubenswrapper[4786]: I0127 13:30:46.600136 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.600114615 podStartE2EDuration="2.600114615s" podCreationTimestamp="2026-01-27 13:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:30:46.596800934 +0000 UTC m=+1429.807415063" watchObservedRunningTime="2026-01-27 13:30:46.600114615 +0000 UTC m=+1429.810728734" Jan 27 13:30:46 crc kubenswrapper[4786]: I0127 13:30:46.891128 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:46 crc kubenswrapper[4786]: I0127 13:30:46.891434 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:49 crc kubenswrapper[4786]: I0127 13:30:49.950552 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:30:51 crc kubenswrapper[4786]: I0127 13:30:51.890791 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:51 crc kubenswrapper[4786]: I0127 13:30:51.891109 4786 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:30:51 crc kubenswrapper[4786]: I0127 13:30:51.946843 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:30:51 crc kubenswrapper[4786]: I0127 13:30:51.946922 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:30:52 crc kubenswrapper[4786]: I0127 13:30:52.972805 4786 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="4537cb47-5182-4574-bbe4-9edd9509e0f1" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.138:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 13:30:52 crc kubenswrapper[4786]: I0127 13:30:52.973138 4786 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="4537cb47-5182-4574-bbe4-9edd9509e0f1" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.138:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 13:30:53 crc kubenswrapper[4786]: I0127 13:30:53.055830 4786 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="701d1213-1604-4347-989c-98f88c361f06" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.139:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 13:30:53 crc kubenswrapper[4786]: I0127 13:30:53.056099 4786 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="701d1213-1604-4347-989c-98f88c361f06" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.139:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" 
Jan 27 13:30:54 crc kubenswrapper[4786]: I0127 13:30:54.949784 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:30:54 crc kubenswrapper[4786]: I0127 13:30:54.974315 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:30:55 crc kubenswrapper[4786]: I0127 13:30:55.673723 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:31:01 crc kubenswrapper[4786]: I0127 13:31:01.893007 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:31:01 crc kubenswrapper[4786]: I0127 13:31:01.893649 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:31:01 crc kubenswrapper[4786]: I0127 13:31:01.896017 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:31:01 crc kubenswrapper[4786]: I0127 13:31:01.896162 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:31:01 crc kubenswrapper[4786]: I0127 13:31:01.954673 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:31:01 crc kubenswrapper[4786]: I0127 13:31:01.955570 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:31:01 crc kubenswrapper[4786]: I0127 13:31:01.956827 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:31:01 crc kubenswrapper[4786]: I0127 13:31:01.965956 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:31:02 crc kubenswrapper[4786]: I0127 13:31:02.704633 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:31:02 crc kubenswrapper[4786]: I0127 13:31:02.709916 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:32:31 crc kubenswrapper[4786]: I0127 13:32:31.366941 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pn27n"] Jan 27 13:32:31 crc kubenswrapper[4786]: I0127 13:32:31.369244 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pn27n" Jan 27 13:32:31 crc kubenswrapper[4786]: I0127 13:32:31.384006 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pn27n"] Jan 27 13:32:31 crc kubenswrapper[4786]: I0127 13:32:31.467332 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b483450a-cb02-4df9-8e5a-640c6b21f731-catalog-content\") pod \"certified-operators-pn27n\" (UID: \"b483450a-cb02-4df9-8e5a-640c6b21f731\") " pod="openshift-marketplace/certified-operators-pn27n" Jan 27 13:32:31 crc kubenswrapper[4786]: I0127 13:32:31.467453 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b483450a-cb02-4df9-8e5a-640c6b21f731-utilities\") pod \"certified-operators-pn27n\" (UID: \"b483450a-cb02-4df9-8e5a-640c6b21f731\") " pod="openshift-marketplace/certified-operators-pn27n" Jan 27 13:32:31 crc kubenswrapper[4786]: I0127 13:32:31.467533 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7r55\" (UniqueName: 
\"kubernetes.io/projected/b483450a-cb02-4df9-8e5a-640c6b21f731-kube-api-access-t7r55\") pod \"certified-operators-pn27n\" (UID: \"b483450a-cb02-4df9-8e5a-640c6b21f731\") " pod="openshift-marketplace/certified-operators-pn27n" Jan 27 13:32:31 crc kubenswrapper[4786]: I0127 13:32:31.569153 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b483450a-cb02-4df9-8e5a-640c6b21f731-catalog-content\") pod \"certified-operators-pn27n\" (UID: \"b483450a-cb02-4df9-8e5a-640c6b21f731\") " pod="openshift-marketplace/certified-operators-pn27n" Jan 27 13:32:31 crc kubenswrapper[4786]: I0127 13:32:31.569218 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b483450a-cb02-4df9-8e5a-640c6b21f731-utilities\") pod \"certified-operators-pn27n\" (UID: \"b483450a-cb02-4df9-8e5a-640c6b21f731\") " pod="openshift-marketplace/certified-operators-pn27n" Jan 27 13:32:31 crc kubenswrapper[4786]: I0127 13:32:31.569246 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7r55\" (UniqueName: \"kubernetes.io/projected/b483450a-cb02-4df9-8e5a-640c6b21f731-kube-api-access-t7r55\") pod \"certified-operators-pn27n\" (UID: \"b483450a-cb02-4df9-8e5a-640c6b21f731\") " pod="openshift-marketplace/certified-operators-pn27n" Jan 27 13:32:31 crc kubenswrapper[4786]: I0127 13:32:31.569860 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b483450a-cb02-4df9-8e5a-640c6b21f731-utilities\") pod \"certified-operators-pn27n\" (UID: \"b483450a-cb02-4df9-8e5a-640c6b21f731\") " pod="openshift-marketplace/certified-operators-pn27n" Jan 27 13:32:31 crc kubenswrapper[4786]: I0127 13:32:31.569922 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b483450a-cb02-4df9-8e5a-640c6b21f731-catalog-content\") pod \"certified-operators-pn27n\" (UID: \"b483450a-cb02-4df9-8e5a-640c6b21f731\") " pod="openshift-marketplace/certified-operators-pn27n" Jan 27 13:32:31 crc kubenswrapper[4786]: I0127 13:32:31.592235 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7r55\" (UniqueName: \"kubernetes.io/projected/b483450a-cb02-4df9-8e5a-640c6b21f731-kube-api-access-t7r55\") pod \"certified-operators-pn27n\" (UID: \"b483450a-cb02-4df9-8e5a-640c6b21f731\") " pod="openshift-marketplace/certified-operators-pn27n" Jan 27 13:32:31 crc kubenswrapper[4786]: I0127 13:32:31.699193 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pn27n" Jan 27 13:32:32 crc kubenswrapper[4786]: I0127 13:32:32.227593 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pn27n"] Jan 27 13:32:32 crc kubenswrapper[4786]: I0127 13:32:32.439764 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pn27n" event={"ID":"b483450a-cb02-4df9-8e5a-640c6b21f731","Type":"ContainerStarted","Data":"2b78c2e1a19a142889ab1f41c3b8731d63a5e491a492ba9e8568dd0b8391e32c"} Jan 27 13:32:32 crc kubenswrapper[4786]: I0127 13:32:32.440104 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pn27n" event={"ID":"b483450a-cb02-4df9-8e5a-640c6b21f731","Type":"ContainerStarted","Data":"ab07e9905808fb6c020299f0a542e0abd57f487bf4545ede2f9613c85686e9fa"} Jan 27 13:32:33 crc kubenswrapper[4786]: I0127 13:32:33.452349 4786 generic.go:334] "Generic (PLEG): container finished" podID="b483450a-cb02-4df9-8e5a-640c6b21f731" containerID="2b78c2e1a19a142889ab1f41c3b8731d63a5e491a492ba9e8568dd0b8391e32c" exitCode=0 Jan 27 13:32:33 crc kubenswrapper[4786]: I0127 13:32:33.452448 4786 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-pn27n" event={"ID":"b483450a-cb02-4df9-8e5a-640c6b21f731","Type":"ContainerDied","Data":"2b78c2e1a19a142889ab1f41c3b8731d63a5e491a492ba9e8568dd0b8391e32c"} Jan 27 13:32:35 crc kubenswrapper[4786]: I0127 13:32:35.635745 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8qfdq"] Jan 27 13:32:35 crc kubenswrapper[4786]: I0127 13:32:35.641861 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8qfdq" Jan 27 13:32:35 crc kubenswrapper[4786]: I0127 13:32:35.653159 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27qht\" (UniqueName: \"kubernetes.io/projected/ae866e62-55d9-4435-b58c-31de1563720c-kube-api-access-27qht\") pod \"redhat-marketplace-8qfdq\" (UID: \"ae866e62-55d9-4435-b58c-31de1563720c\") " pod="openshift-marketplace/redhat-marketplace-8qfdq" Jan 27 13:32:35 crc kubenswrapper[4786]: I0127 13:32:35.653245 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae866e62-55d9-4435-b58c-31de1563720c-utilities\") pod \"redhat-marketplace-8qfdq\" (UID: \"ae866e62-55d9-4435-b58c-31de1563720c\") " pod="openshift-marketplace/redhat-marketplace-8qfdq" Jan 27 13:32:35 crc kubenswrapper[4786]: I0127 13:32:35.653390 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae866e62-55d9-4435-b58c-31de1563720c-catalog-content\") pod \"redhat-marketplace-8qfdq\" (UID: \"ae866e62-55d9-4435-b58c-31de1563720c\") " pod="openshift-marketplace/redhat-marketplace-8qfdq" Jan 27 13:32:35 crc kubenswrapper[4786]: I0127 13:32:35.660706 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8qfdq"] Jan 27 
13:32:35 crc kubenswrapper[4786]: I0127 13:32:35.754185 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae866e62-55d9-4435-b58c-31de1563720c-catalog-content\") pod \"redhat-marketplace-8qfdq\" (UID: \"ae866e62-55d9-4435-b58c-31de1563720c\") " pod="openshift-marketplace/redhat-marketplace-8qfdq" Jan 27 13:32:35 crc kubenswrapper[4786]: I0127 13:32:35.754246 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27qht\" (UniqueName: \"kubernetes.io/projected/ae866e62-55d9-4435-b58c-31de1563720c-kube-api-access-27qht\") pod \"redhat-marketplace-8qfdq\" (UID: \"ae866e62-55d9-4435-b58c-31de1563720c\") " pod="openshift-marketplace/redhat-marketplace-8qfdq" Jan 27 13:32:35 crc kubenswrapper[4786]: I0127 13:32:35.754466 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae866e62-55d9-4435-b58c-31de1563720c-utilities\") pod \"redhat-marketplace-8qfdq\" (UID: \"ae866e62-55d9-4435-b58c-31de1563720c\") " pod="openshift-marketplace/redhat-marketplace-8qfdq" Jan 27 13:32:35 crc kubenswrapper[4786]: I0127 13:32:35.754840 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae866e62-55d9-4435-b58c-31de1563720c-catalog-content\") pod \"redhat-marketplace-8qfdq\" (UID: \"ae866e62-55d9-4435-b58c-31de1563720c\") " pod="openshift-marketplace/redhat-marketplace-8qfdq" Jan 27 13:32:35 crc kubenswrapper[4786]: I0127 13:32:35.754976 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae866e62-55d9-4435-b58c-31de1563720c-utilities\") pod \"redhat-marketplace-8qfdq\" (UID: \"ae866e62-55d9-4435-b58c-31de1563720c\") " pod="openshift-marketplace/redhat-marketplace-8qfdq" Jan 27 13:32:35 crc kubenswrapper[4786]: I0127 
13:32:35.773453 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27qht\" (UniqueName: \"kubernetes.io/projected/ae866e62-55d9-4435-b58c-31de1563720c-kube-api-access-27qht\") pod \"redhat-marketplace-8qfdq\" (UID: \"ae866e62-55d9-4435-b58c-31de1563720c\") " pod="openshift-marketplace/redhat-marketplace-8qfdq" Jan 27 13:32:35 crc kubenswrapper[4786]: I0127 13:32:35.980749 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8qfdq" Jan 27 13:32:36 crc kubenswrapper[4786]: I0127 13:32:36.881377 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8qfdq"] Jan 27 13:32:37 crc kubenswrapper[4786]: I0127 13:32:37.488004 4786 generic.go:334] "Generic (PLEG): container finished" podID="b483450a-cb02-4df9-8e5a-640c6b21f731" containerID="96ed53c956b159df8840d5aff4f9e5f58e0514619834be05538e88c84c997bbb" exitCode=0 Jan 27 13:32:37 crc kubenswrapper[4786]: I0127 13:32:37.488136 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pn27n" event={"ID":"b483450a-cb02-4df9-8e5a-640c6b21f731","Type":"ContainerDied","Data":"96ed53c956b159df8840d5aff4f9e5f58e0514619834be05538e88c84c997bbb"} Jan 27 13:32:37 crc kubenswrapper[4786]: I0127 13:32:37.489961 4786 generic.go:334] "Generic (PLEG): container finished" podID="ae866e62-55d9-4435-b58c-31de1563720c" containerID="0e490dc565cb02a6d075bd246cdbbf86ec7c04ed34b4f0783787082fe55fa846" exitCode=0 Jan 27 13:32:37 crc kubenswrapper[4786]: I0127 13:32:37.489997 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8qfdq" event={"ID":"ae866e62-55d9-4435-b58c-31de1563720c","Type":"ContainerDied","Data":"0e490dc565cb02a6d075bd246cdbbf86ec7c04ed34b4f0783787082fe55fa846"} Jan 27 13:32:37 crc kubenswrapper[4786]: I0127 13:32:37.490026 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-8qfdq" event={"ID":"ae866e62-55d9-4435-b58c-31de1563720c","Type":"ContainerStarted","Data":"c2422ec5850afeb0fae928a9e062818228839a3fa6f8aa20dea8acd2e3633e12"} Jan 27 13:32:38 crc kubenswrapper[4786]: I0127 13:32:38.499284 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pn27n" event={"ID":"b483450a-cb02-4df9-8e5a-640c6b21f731","Type":"ContainerStarted","Data":"96086d027fbe77283f56b07761ce040c6397832c9cd1a2f16fa23e889fe7d493"} Jan 27 13:32:38 crc kubenswrapper[4786]: I0127 13:32:38.524122 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pn27n" podStartSLOduration=2.019357505 podStartE2EDuration="7.524100616s" podCreationTimestamp="2026-01-27 13:32:31 +0000 UTC" firstStartedPulling="2026-01-27 13:32:32.441099855 +0000 UTC m=+1535.651713974" lastFinishedPulling="2026-01-27 13:32:37.945842966 +0000 UTC m=+1541.156457085" observedRunningTime="2026-01-27 13:32:38.51584875 +0000 UTC m=+1541.726462869" watchObservedRunningTime="2026-01-27 13:32:38.524100616 +0000 UTC m=+1541.734714735" Jan 27 13:32:39 crc kubenswrapper[4786]: I0127 13:32:39.509624 4786 generic.go:334] "Generic (PLEG): container finished" podID="ae866e62-55d9-4435-b58c-31de1563720c" containerID="c4b9ac31c0ec01d2600568e89e802a4bbe081dd203c07ed60f2c5b611a14292e" exitCode=0 Jan 27 13:32:39 crc kubenswrapper[4786]: I0127 13:32:39.509718 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8qfdq" event={"ID":"ae866e62-55d9-4435-b58c-31de1563720c","Type":"ContainerDied","Data":"c4b9ac31c0ec01d2600568e89e802a4bbe081dd203c07ed60f2c5b611a14292e"} Jan 27 13:32:39 crc kubenswrapper[4786]: I0127 13:32:39.532342 4786 patch_prober.go:28] interesting pod/machine-config-daemon-7bxtk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:32:39 crc kubenswrapper[4786]: I0127 13:32:39.532630 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:32:41 crc kubenswrapper[4786]: I0127 13:32:41.525950 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8qfdq" event={"ID":"ae866e62-55d9-4435-b58c-31de1563720c","Type":"ContainerStarted","Data":"ae7034e3a8c7159f3c143b0cc511c31a469508925e4458d418818ce732ad4256"} Jan 27 13:32:41 crc kubenswrapper[4786]: I0127 13:32:41.549784 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8qfdq" podStartSLOduration=3.04748839 podStartE2EDuration="6.549763928s" podCreationTimestamp="2026-01-27 13:32:35 +0000 UTC" firstStartedPulling="2026-01-27 13:32:37.49185554 +0000 UTC m=+1540.702469659" lastFinishedPulling="2026-01-27 13:32:40.994131078 +0000 UTC m=+1544.204745197" observedRunningTime="2026-01-27 13:32:41.54325583 +0000 UTC m=+1544.753869959" watchObservedRunningTime="2026-01-27 13:32:41.549763928 +0000 UTC m=+1544.760378037" Jan 27 13:32:41 crc kubenswrapper[4786]: I0127 13:32:41.700076 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pn27n" Jan 27 13:32:41 crc kubenswrapper[4786]: I0127 13:32:41.700137 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pn27n" Jan 27 13:32:41 crc kubenswrapper[4786]: I0127 13:32:41.748120 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-pn27n" Jan 27 13:32:45 crc kubenswrapper[4786]: I0127 13:32:45.021159 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-486kx"] Jan 27 13:32:45 crc kubenswrapper[4786]: I0127 13:32:45.023857 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-486kx" Jan 27 13:32:45 crc kubenswrapper[4786]: I0127 13:32:45.033446 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-486kx"] Jan 27 13:32:45 crc kubenswrapper[4786]: I0127 13:32:45.196060 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b57d4e6a-134f-4153-b393-b3a831a78679-utilities\") pod \"community-operators-486kx\" (UID: \"b57d4e6a-134f-4153-b393-b3a831a78679\") " pod="openshift-marketplace/community-operators-486kx" Jan 27 13:32:45 crc kubenswrapper[4786]: I0127 13:32:45.196165 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b57d4e6a-134f-4153-b393-b3a831a78679-catalog-content\") pod \"community-operators-486kx\" (UID: \"b57d4e6a-134f-4153-b393-b3a831a78679\") " pod="openshift-marketplace/community-operators-486kx" Jan 27 13:32:45 crc kubenswrapper[4786]: I0127 13:32:45.196199 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7qvw\" (UniqueName: \"kubernetes.io/projected/b57d4e6a-134f-4153-b393-b3a831a78679-kube-api-access-t7qvw\") pod \"community-operators-486kx\" (UID: \"b57d4e6a-134f-4153-b393-b3a831a78679\") " pod="openshift-marketplace/community-operators-486kx" Jan 27 13:32:45 crc kubenswrapper[4786]: I0127 13:32:45.297265 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/b57d4e6a-134f-4153-b393-b3a831a78679-utilities\") pod \"community-operators-486kx\" (UID: \"b57d4e6a-134f-4153-b393-b3a831a78679\") " pod="openshift-marketplace/community-operators-486kx" Jan 27 13:32:45 crc kubenswrapper[4786]: I0127 13:32:45.297356 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b57d4e6a-134f-4153-b393-b3a831a78679-catalog-content\") pod \"community-operators-486kx\" (UID: \"b57d4e6a-134f-4153-b393-b3a831a78679\") " pod="openshift-marketplace/community-operators-486kx" Jan 27 13:32:45 crc kubenswrapper[4786]: I0127 13:32:45.297375 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7qvw\" (UniqueName: \"kubernetes.io/projected/b57d4e6a-134f-4153-b393-b3a831a78679-kube-api-access-t7qvw\") pod \"community-operators-486kx\" (UID: \"b57d4e6a-134f-4153-b393-b3a831a78679\") " pod="openshift-marketplace/community-operators-486kx" Jan 27 13:32:45 crc kubenswrapper[4786]: I0127 13:32:45.298118 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b57d4e6a-134f-4153-b393-b3a831a78679-utilities\") pod \"community-operators-486kx\" (UID: \"b57d4e6a-134f-4153-b393-b3a831a78679\") " pod="openshift-marketplace/community-operators-486kx" Jan 27 13:32:45 crc kubenswrapper[4786]: I0127 13:32:45.298347 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b57d4e6a-134f-4153-b393-b3a831a78679-catalog-content\") pod \"community-operators-486kx\" (UID: \"b57d4e6a-134f-4153-b393-b3a831a78679\") " pod="openshift-marketplace/community-operators-486kx" Jan 27 13:32:45 crc kubenswrapper[4786]: I0127 13:32:45.325266 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7qvw\" (UniqueName: 
\"kubernetes.io/projected/b57d4e6a-134f-4153-b393-b3a831a78679-kube-api-access-t7qvw\") pod \"community-operators-486kx\" (UID: \"b57d4e6a-134f-4153-b393-b3a831a78679\") " pod="openshift-marketplace/community-operators-486kx" Jan 27 13:32:45 crc kubenswrapper[4786]: I0127 13:32:45.350848 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-486kx" Jan 27 13:32:45 crc kubenswrapper[4786]: W0127 13:32:45.931958 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb57d4e6a_134f_4153_b393_b3a831a78679.slice/crio-68601930f249c6b16819e08404d50ac7aecd2e18cc447ef46c1b1a913e778675 WatchSource:0}: Error finding container 68601930f249c6b16819e08404d50ac7aecd2e18cc447ef46c1b1a913e778675: Status 404 returned error can't find the container with id 68601930f249c6b16819e08404d50ac7aecd2e18cc447ef46c1b1a913e778675 Jan 27 13:32:45 crc kubenswrapper[4786]: I0127 13:32:45.933503 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-486kx"] Jan 27 13:32:45 crc kubenswrapper[4786]: I0127 13:32:45.982053 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8qfdq" Jan 27 13:32:45 crc kubenswrapper[4786]: I0127 13:32:45.983103 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8qfdq" Jan 27 13:32:46 crc kubenswrapper[4786]: I0127 13:32:46.569936 4786 generic.go:334] "Generic (PLEG): container finished" podID="b57d4e6a-134f-4153-b393-b3a831a78679" containerID="709608a8490e76440d56005184a2b28e23447ae8363a61c4b86b95e6837ff80a" exitCode=0 Jan 27 13:32:46 crc kubenswrapper[4786]: I0127 13:32:46.570028 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-486kx" 
event={"ID":"b57d4e6a-134f-4153-b393-b3a831a78679","Type":"ContainerDied","Data":"709608a8490e76440d56005184a2b28e23447ae8363a61c4b86b95e6837ff80a"} Jan 27 13:32:46 crc kubenswrapper[4786]: I0127 13:32:46.570333 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-486kx" event={"ID":"b57d4e6a-134f-4153-b393-b3a831a78679","Type":"ContainerStarted","Data":"68601930f249c6b16819e08404d50ac7aecd2e18cc447ef46c1b1a913e778675"} Jan 27 13:32:47 crc kubenswrapper[4786]: I0127 13:32:47.022522 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-8qfdq" podUID="ae866e62-55d9-4435-b58c-31de1563720c" containerName="registry-server" probeResult="failure" output=< Jan 27 13:32:47 crc kubenswrapper[4786]: timeout: failed to connect service ":50051" within 1s Jan 27 13:32:47 crc kubenswrapper[4786]: > Jan 27 13:32:47 crc kubenswrapper[4786]: I0127 13:32:47.580908 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-486kx" event={"ID":"b57d4e6a-134f-4153-b393-b3a831a78679","Type":"ContainerStarted","Data":"9ad11128f1d513b857936ae6030c1a5c4bca2e188512480896a0ca95e22dd519"} Jan 27 13:32:48 crc kubenswrapper[4786]: I0127 13:32:48.590889 4786 generic.go:334] "Generic (PLEG): container finished" podID="b57d4e6a-134f-4153-b393-b3a831a78679" containerID="9ad11128f1d513b857936ae6030c1a5c4bca2e188512480896a0ca95e22dd519" exitCode=0 Jan 27 13:32:48 crc kubenswrapper[4786]: I0127 13:32:48.590969 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-486kx" event={"ID":"b57d4e6a-134f-4153-b393-b3a831a78679","Type":"ContainerDied","Data":"9ad11128f1d513b857936ae6030c1a5c4bca2e188512480896a0ca95e22dd519"} Jan 27 13:32:49 crc kubenswrapper[4786]: I0127 13:32:49.601269 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-486kx" 
event={"ID":"b57d4e6a-134f-4153-b393-b3a831a78679","Type":"ContainerStarted","Data":"86e4000feca5e6f9f8f869e63a85f1ae6a6e4beccdc145babeac0c2163990b0c"} Jan 27 13:32:49 crc kubenswrapper[4786]: I0127 13:32:49.619810 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-486kx" podStartSLOduration=2.216154318 podStartE2EDuration="4.619792141s" podCreationTimestamp="2026-01-27 13:32:45 +0000 UTC" firstStartedPulling="2026-01-27 13:32:46.571626682 +0000 UTC m=+1549.782240801" lastFinishedPulling="2026-01-27 13:32:48.975264505 +0000 UTC m=+1552.185878624" observedRunningTime="2026-01-27 13:32:49.617357474 +0000 UTC m=+1552.827971603" watchObservedRunningTime="2026-01-27 13:32:49.619792141 +0000 UTC m=+1552.830406260" Jan 27 13:32:51 crc kubenswrapper[4786]: I0127 13:32:51.744071 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pn27n" Jan 27 13:32:52 crc kubenswrapper[4786]: I0127 13:32:52.047021 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pn27n"] Jan 27 13:32:52 crc kubenswrapper[4786]: I0127 13:32:52.218287 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mn8gs"] Jan 27 13:32:52 crc kubenswrapper[4786]: I0127 13:32:52.219023 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mn8gs" podUID="641b2493-f39b-4215-b189-73893b0e03a4" containerName="registry-server" containerID="cri-o://ea9b4a1871b32cbcdb121c42b00de91b3f51e46e9b4b7c2959120119dd04f74d" gracePeriod=2 Jan 27 13:32:53 crc kubenswrapper[4786]: I0127 13:32:52.636804 4786 generic.go:334] "Generic (PLEG): container finished" podID="641b2493-f39b-4215-b189-73893b0e03a4" containerID="ea9b4a1871b32cbcdb121c42b00de91b3f51e46e9b4b7c2959120119dd04f74d" exitCode=0 Jan 27 13:32:53 crc kubenswrapper[4786]: I0127 
13:32:52.637129 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mn8gs" event={"ID":"641b2493-f39b-4215-b189-73893b0e03a4","Type":"ContainerDied","Data":"ea9b4a1871b32cbcdb121c42b00de91b3f51e46e9b4b7c2959120119dd04f74d"} Jan 27 13:32:53 crc kubenswrapper[4786]: I0127 13:32:52.758254 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mn8gs" Jan 27 13:32:53 crc kubenswrapper[4786]: I0127 13:32:52.914086 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkxvm\" (UniqueName: \"kubernetes.io/projected/641b2493-f39b-4215-b189-73893b0e03a4-kube-api-access-pkxvm\") pod \"641b2493-f39b-4215-b189-73893b0e03a4\" (UID: \"641b2493-f39b-4215-b189-73893b0e03a4\") " Jan 27 13:32:53 crc kubenswrapper[4786]: I0127 13:32:52.914186 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/641b2493-f39b-4215-b189-73893b0e03a4-catalog-content\") pod \"641b2493-f39b-4215-b189-73893b0e03a4\" (UID: \"641b2493-f39b-4215-b189-73893b0e03a4\") " Jan 27 13:32:53 crc kubenswrapper[4786]: I0127 13:32:52.914265 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/641b2493-f39b-4215-b189-73893b0e03a4-utilities\") pod \"641b2493-f39b-4215-b189-73893b0e03a4\" (UID: \"641b2493-f39b-4215-b189-73893b0e03a4\") " Jan 27 13:32:53 crc kubenswrapper[4786]: I0127 13:32:52.914817 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/641b2493-f39b-4215-b189-73893b0e03a4-utilities" (OuterVolumeSpecName: "utilities") pod "641b2493-f39b-4215-b189-73893b0e03a4" (UID: "641b2493-f39b-4215-b189-73893b0e03a4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:32:53 crc kubenswrapper[4786]: I0127 13:32:52.932811 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/641b2493-f39b-4215-b189-73893b0e03a4-kube-api-access-pkxvm" (OuterVolumeSpecName: "kube-api-access-pkxvm") pod "641b2493-f39b-4215-b189-73893b0e03a4" (UID: "641b2493-f39b-4215-b189-73893b0e03a4"). InnerVolumeSpecName "kube-api-access-pkxvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:32:53 crc kubenswrapper[4786]: I0127 13:32:52.972556 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/641b2493-f39b-4215-b189-73893b0e03a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "641b2493-f39b-4215-b189-73893b0e03a4" (UID: "641b2493-f39b-4215-b189-73893b0e03a4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:32:53 crc kubenswrapper[4786]: I0127 13:32:53.016737 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/641b2493-f39b-4215-b189-73893b0e03a4-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 13:32:53 crc kubenswrapper[4786]: I0127 13:32:53.016776 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkxvm\" (UniqueName: \"kubernetes.io/projected/641b2493-f39b-4215-b189-73893b0e03a4-kube-api-access-pkxvm\") on node \"crc\" DevicePath \"\"" Jan 27 13:32:53 crc kubenswrapper[4786]: I0127 13:32:53.016788 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/641b2493-f39b-4215-b189-73893b0e03a4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 13:32:53 crc kubenswrapper[4786]: I0127 13:32:53.648922 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mn8gs" 
event={"ID":"641b2493-f39b-4215-b189-73893b0e03a4","Type":"ContainerDied","Data":"6edfd0d72d3bf3e3ba5ec8600b886de84426e9738b351270f1e806aa52a663fa"} Jan 27 13:32:53 crc kubenswrapper[4786]: I0127 13:32:53.648981 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mn8gs" Jan 27 13:32:53 crc kubenswrapper[4786]: I0127 13:32:53.648977 4786 scope.go:117] "RemoveContainer" containerID="ea9b4a1871b32cbcdb121c42b00de91b3f51e46e9b4b7c2959120119dd04f74d" Jan 27 13:32:53 crc kubenswrapper[4786]: I0127 13:32:53.678025 4786 scope.go:117] "RemoveContainer" containerID="fa914eb71fbd54771df447a56b68d2bf1830313051984dfed2e9225e1817de7d" Jan 27 13:32:53 crc kubenswrapper[4786]: I0127 13:32:53.678137 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mn8gs"] Jan 27 13:32:53 crc kubenswrapper[4786]: I0127 13:32:53.689333 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mn8gs"] Jan 27 13:32:53 crc kubenswrapper[4786]: I0127 13:32:53.698927 4786 scope.go:117] "RemoveContainer" containerID="06731271ae6d9fce73f5097614d9f548fa031fc1e69df2cc2c7632b479093b7c" Jan 27 13:32:55 crc kubenswrapper[4786]: I0127 13:32:55.351403 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-486kx" Jan 27 13:32:55 crc kubenswrapper[4786]: I0127 13:32:55.351456 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-486kx" Jan 27 13:32:55 crc kubenswrapper[4786]: I0127 13:32:55.395173 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-486kx" Jan 27 13:32:55 crc kubenswrapper[4786]: I0127 13:32:55.475155 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="641b2493-f39b-4215-b189-73893b0e03a4" 
path="/var/lib/kubelet/pods/641b2493-f39b-4215-b189-73893b0e03a4/volumes" Jan 27 13:32:55 crc kubenswrapper[4786]: I0127 13:32:55.703261 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-486kx" Jan 27 13:32:56 crc kubenswrapper[4786]: I0127 13:32:56.030350 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8qfdq" Jan 27 13:32:56 crc kubenswrapper[4786]: I0127 13:32:56.074915 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8qfdq" Jan 27 13:32:57 crc kubenswrapper[4786]: I0127 13:32:57.810838 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-486kx"] Jan 27 13:32:57 crc kubenswrapper[4786]: I0127 13:32:57.811056 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-486kx" podUID="b57d4e6a-134f-4153-b393-b3a831a78679" containerName="registry-server" containerID="cri-o://86e4000feca5e6f9f8f869e63a85f1ae6a6e4beccdc145babeac0c2163990b0c" gracePeriod=2 Jan 27 13:32:58 crc kubenswrapper[4786]: I0127 13:32:58.176996 4786 scope.go:117] "RemoveContainer" containerID="4ede8f703d3d70513781218f5a068930f520894b7584266350e0555b4c4d0ddc" Jan 27 13:32:58 crc kubenswrapper[4786]: I0127 13:32:58.397454 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-486kx" Jan 27 13:32:58 crc kubenswrapper[4786]: I0127 13:32:58.502193 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b57d4e6a-134f-4153-b393-b3a831a78679-catalog-content\") pod \"b57d4e6a-134f-4153-b393-b3a831a78679\" (UID: \"b57d4e6a-134f-4153-b393-b3a831a78679\") " Jan 27 13:32:58 crc kubenswrapper[4786]: I0127 13:32:58.502276 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7qvw\" (UniqueName: \"kubernetes.io/projected/b57d4e6a-134f-4153-b393-b3a831a78679-kube-api-access-t7qvw\") pod \"b57d4e6a-134f-4153-b393-b3a831a78679\" (UID: \"b57d4e6a-134f-4153-b393-b3a831a78679\") " Jan 27 13:32:58 crc kubenswrapper[4786]: I0127 13:32:58.502327 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b57d4e6a-134f-4153-b393-b3a831a78679-utilities\") pod \"b57d4e6a-134f-4153-b393-b3a831a78679\" (UID: \"b57d4e6a-134f-4153-b393-b3a831a78679\") " Jan 27 13:32:58 crc kubenswrapper[4786]: I0127 13:32:58.503489 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b57d4e6a-134f-4153-b393-b3a831a78679-utilities" (OuterVolumeSpecName: "utilities") pod "b57d4e6a-134f-4153-b393-b3a831a78679" (UID: "b57d4e6a-134f-4153-b393-b3a831a78679"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:32:58 crc kubenswrapper[4786]: I0127 13:32:58.510816 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b57d4e6a-134f-4153-b393-b3a831a78679-kube-api-access-t7qvw" (OuterVolumeSpecName: "kube-api-access-t7qvw") pod "b57d4e6a-134f-4153-b393-b3a831a78679" (UID: "b57d4e6a-134f-4153-b393-b3a831a78679"). InnerVolumeSpecName "kube-api-access-t7qvw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:32:58 crc kubenswrapper[4786]: I0127 13:32:58.545420 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b57d4e6a-134f-4153-b393-b3a831a78679-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b57d4e6a-134f-4153-b393-b3a831a78679" (UID: "b57d4e6a-134f-4153-b393-b3a831a78679"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:32:58 crc kubenswrapper[4786]: I0127 13:32:58.604100 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b57d4e6a-134f-4153-b393-b3a831a78679-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 13:32:58 crc kubenswrapper[4786]: I0127 13:32:58.604421 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7qvw\" (UniqueName: \"kubernetes.io/projected/b57d4e6a-134f-4153-b393-b3a831a78679-kube-api-access-t7qvw\") on node \"crc\" DevicePath \"\"" Jan 27 13:32:58 crc kubenswrapper[4786]: I0127 13:32:58.604437 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b57d4e6a-134f-4153-b393-b3a831a78679-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 13:32:58 crc kubenswrapper[4786]: I0127 13:32:58.691271 4786 generic.go:334] "Generic (PLEG): container finished" podID="b57d4e6a-134f-4153-b393-b3a831a78679" containerID="86e4000feca5e6f9f8f869e63a85f1ae6a6e4beccdc145babeac0c2163990b0c" exitCode=0 Jan 27 13:32:58 crc kubenswrapper[4786]: I0127 13:32:58.691330 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-486kx" event={"ID":"b57d4e6a-134f-4153-b393-b3a831a78679","Type":"ContainerDied","Data":"86e4000feca5e6f9f8f869e63a85f1ae6a6e4beccdc145babeac0c2163990b0c"} Jan 27 13:32:58 crc kubenswrapper[4786]: I0127 13:32:58.691361 4786 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-486kx" event={"ID":"b57d4e6a-134f-4153-b393-b3a831a78679","Type":"ContainerDied","Data":"68601930f249c6b16819e08404d50ac7aecd2e18cc447ef46c1b1a913e778675"} Jan 27 13:32:58 crc kubenswrapper[4786]: I0127 13:32:58.691392 4786 scope.go:117] "RemoveContainer" containerID="86e4000feca5e6f9f8f869e63a85f1ae6a6e4beccdc145babeac0c2163990b0c" Jan 27 13:32:58 crc kubenswrapper[4786]: I0127 13:32:58.691521 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-486kx" Jan 27 13:32:58 crc kubenswrapper[4786]: I0127 13:32:58.711727 4786 scope.go:117] "RemoveContainer" containerID="9ad11128f1d513b857936ae6030c1a5c4bca2e188512480896a0ca95e22dd519" Jan 27 13:32:58 crc kubenswrapper[4786]: I0127 13:32:58.727886 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-486kx"] Jan 27 13:32:58 crc kubenswrapper[4786]: I0127 13:32:58.735102 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-486kx"] Jan 27 13:32:58 crc kubenswrapper[4786]: I0127 13:32:58.753079 4786 scope.go:117] "RemoveContainer" containerID="709608a8490e76440d56005184a2b28e23447ae8363a61c4b86b95e6837ff80a" Jan 27 13:32:58 crc kubenswrapper[4786]: I0127 13:32:58.773064 4786 scope.go:117] "RemoveContainer" containerID="86e4000feca5e6f9f8f869e63a85f1ae6a6e4beccdc145babeac0c2163990b0c" Jan 27 13:32:58 crc kubenswrapper[4786]: E0127 13:32:58.774845 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86e4000feca5e6f9f8f869e63a85f1ae6a6e4beccdc145babeac0c2163990b0c\": container with ID starting with 86e4000feca5e6f9f8f869e63a85f1ae6a6e4beccdc145babeac0c2163990b0c not found: ID does not exist" containerID="86e4000feca5e6f9f8f869e63a85f1ae6a6e4beccdc145babeac0c2163990b0c" Jan 27 13:32:58 crc kubenswrapper[4786]: I0127 
13:32:58.774894 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86e4000feca5e6f9f8f869e63a85f1ae6a6e4beccdc145babeac0c2163990b0c"} err="failed to get container status \"86e4000feca5e6f9f8f869e63a85f1ae6a6e4beccdc145babeac0c2163990b0c\": rpc error: code = NotFound desc = could not find container \"86e4000feca5e6f9f8f869e63a85f1ae6a6e4beccdc145babeac0c2163990b0c\": container with ID starting with 86e4000feca5e6f9f8f869e63a85f1ae6a6e4beccdc145babeac0c2163990b0c not found: ID does not exist" Jan 27 13:32:58 crc kubenswrapper[4786]: I0127 13:32:58.774930 4786 scope.go:117] "RemoveContainer" containerID="9ad11128f1d513b857936ae6030c1a5c4bca2e188512480896a0ca95e22dd519" Jan 27 13:32:58 crc kubenswrapper[4786]: E0127 13:32:58.775258 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ad11128f1d513b857936ae6030c1a5c4bca2e188512480896a0ca95e22dd519\": container with ID starting with 9ad11128f1d513b857936ae6030c1a5c4bca2e188512480896a0ca95e22dd519 not found: ID does not exist" containerID="9ad11128f1d513b857936ae6030c1a5c4bca2e188512480896a0ca95e22dd519" Jan 27 13:32:58 crc kubenswrapper[4786]: I0127 13:32:58.776026 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ad11128f1d513b857936ae6030c1a5c4bca2e188512480896a0ca95e22dd519"} err="failed to get container status \"9ad11128f1d513b857936ae6030c1a5c4bca2e188512480896a0ca95e22dd519\": rpc error: code = NotFound desc = could not find container \"9ad11128f1d513b857936ae6030c1a5c4bca2e188512480896a0ca95e22dd519\": container with ID starting with 9ad11128f1d513b857936ae6030c1a5c4bca2e188512480896a0ca95e22dd519 not found: ID does not exist" Jan 27 13:32:58 crc kubenswrapper[4786]: I0127 13:32:58.776054 4786 scope.go:117] "RemoveContainer" containerID="709608a8490e76440d56005184a2b28e23447ae8363a61c4b86b95e6837ff80a" Jan 27 13:32:58 crc 
kubenswrapper[4786]: E0127 13:32:58.776477 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"709608a8490e76440d56005184a2b28e23447ae8363a61c4b86b95e6837ff80a\": container with ID starting with 709608a8490e76440d56005184a2b28e23447ae8363a61c4b86b95e6837ff80a not found: ID does not exist" containerID="709608a8490e76440d56005184a2b28e23447ae8363a61c4b86b95e6837ff80a" Jan 27 13:32:58 crc kubenswrapper[4786]: I0127 13:32:58.776513 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"709608a8490e76440d56005184a2b28e23447ae8363a61c4b86b95e6837ff80a"} err="failed to get container status \"709608a8490e76440d56005184a2b28e23447ae8363a61c4b86b95e6837ff80a\": rpc error: code = NotFound desc = could not find container \"709608a8490e76440d56005184a2b28e23447ae8363a61c4b86b95e6837ff80a\": container with ID starting with 709608a8490e76440d56005184a2b28e23447ae8363a61c4b86b95e6837ff80a not found: ID does not exist" Jan 27 13:32:59 crc kubenswrapper[4786]: I0127 13:32:59.212447 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8qfdq"] Jan 27 13:32:59 crc kubenswrapper[4786]: I0127 13:32:59.212713 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8qfdq" podUID="ae866e62-55d9-4435-b58c-31de1563720c" containerName="registry-server" containerID="cri-o://ae7034e3a8c7159f3c143b0cc511c31a469508925e4458d418818ce732ad4256" gracePeriod=2 Jan 27 13:32:59 crc kubenswrapper[4786]: I0127 13:32:59.479933 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b57d4e6a-134f-4153-b393-b3a831a78679" path="/var/lib/kubelet/pods/b57d4e6a-134f-4153-b393-b3a831a78679/volumes" Jan 27 13:32:59 crc kubenswrapper[4786]: I0127 13:32:59.668333 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8qfdq" Jan 27 13:32:59 crc kubenswrapper[4786]: I0127 13:32:59.724924 4786 generic.go:334] "Generic (PLEG): container finished" podID="ae866e62-55d9-4435-b58c-31de1563720c" containerID="ae7034e3a8c7159f3c143b0cc511c31a469508925e4458d418818ce732ad4256" exitCode=0 Jan 27 13:32:59 crc kubenswrapper[4786]: I0127 13:32:59.724972 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8qfdq" event={"ID":"ae866e62-55d9-4435-b58c-31de1563720c","Type":"ContainerDied","Data":"ae7034e3a8c7159f3c143b0cc511c31a469508925e4458d418818ce732ad4256"} Jan 27 13:32:59 crc kubenswrapper[4786]: I0127 13:32:59.725004 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8qfdq" event={"ID":"ae866e62-55d9-4435-b58c-31de1563720c","Type":"ContainerDied","Data":"c2422ec5850afeb0fae928a9e062818228839a3fa6f8aa20dea8acd2e3633e12"} Jan 27 13:32:59 crc kubenswrapper[4786]: I0127 13:32:59.725026 4786 scope.go:117] "RemoveContainer" containerID="ae7034e3a8c7159f3c143b0cc511c31a469508925e4458d418818ce732ad4256" Jan 27 13:32:59 crc kubenswrapper[4786]: I0127 13:32:59.725123 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8qfdq" Jan 27 13:32:59 crc kubenswrapper[4786]: I0127 13:32:59.770825 4786 scope.go:117] "RemoveContainer" containerID="c4b9ac31c0ec01d2600568e89e802a4bbe081dd203c07ed60f2c5b611a14292e" Jan 27 13:32:59 crc kubenswrapper[4786]: I0127 13:32:59.794372 4786 scope.go:117] "RemoveContainer" containerID="0e490dc565cb02a6d075bd246cdbbf86ec7c04ed34b4f0783787082fe55fa846" Jan 27 13:32:59 crc kubenswrapper[4786]: I0127 13:32:59.822081 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae866e62-55d9-4435-b58c-31de1563720c-catalog-content\") pod \"ae866e62-55d9-4435-b58c-31de1563720c\" (UID: \"ae866e62-55d9-4435-b58c-31de1563720c\") " Jan 27 13:32:59 crc kubenswrapper[4786]: I0127 13:32:59.822164 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27qht\" (UniqueName: \"kubernetes.io/projected/ae866e62-55d9-4435-b58c-31de1563720c-kube-api-access-27qht\") pod \"ae866e62-55d9-4435-b58c-31de1563720c\" (UID: \"ae866e62-55d9-4435-b58c-31de1563720c\") " Jan 27 13:32:59 crc kubenswrapper[4786]: I0127 13:32:59.822247 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae866e62-55d9-4435-b58c-31de1563720c-utilities\") pod \"ae866e62-55d9-4435-b58c-31de1563720c\" (UID: \"ae866e62-55d9-4435-b58c-31de1563720c\") " Jan 27 13:32:59 crc kubenswrapper[4786]: I0127 13:32:59.822880 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae866e62-55d9-4435-b58c-31de1563720c-utilities" (OuterVolumeSpecName: "utilities") pod "ae866e62-55d9-4435-b58c-31de1563720c" (UID: "ae866e62-55d9-4435-b58c-31de1563720c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:32:59 crc kubenswrapper[4786]: I0127 13:32:59.824286 4786 scope.go:117] "RemoveContainer" containerID="ae7034e3a8c7159f3c143b0cc511c31a469508925e4458d418818ce732ad4256" Jan 27 13:32:59 crc kubenswrapper[4786]: E0127 13:32:59.824832 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae7034e3a8c7159f3c143b0cc511c31a469508925e4458d418818ce732ad4256\": container with ID starting with ae7034e3a8c7159f3c143b0cc511c31a469508925e4458d418818ce732ad4256 not found: ID does not exist" containerID="ae7034e3a8c7159f3c143b0cc511c31a469508925e4458d418818ce732ad4256" Jan 27 13:32:59 crc kubenswrapper[4786]: I0127 13:32:59.824904 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae7034e3a8c7159f3c143b0cc511c31a469508925e4458d418818ce732ad4256"} err="failed to get container status \"ae7034e3a8c7159f3c143b0cc511c31a469508925e4458d418818ce732ad4256\": rpc error: code = NotFound desc = could not find container \"ae7034e3a8c7159f3c143b0cc511c31a469508925e4458d418818ce732ad4256\": container with ID starting with ae7034e3a8c7159f3c143b0cc511c31a469508925e4458d418818ce732ad4256 not found: ID does not exist" Jan 27 13:32:59 crc kubenswrapper[4786]: I0127 13:32:59.824932 4786 scope.go:117] "RemoveContainer" containerID="c4b9ac31c0ec01d2600568e89e802a4bbe081dd203c07ed60f2c5b611a14292e" Jan 27 13:32:59 crc kubenswrapper[4786]: E0127 13:32:59.825255 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4b9ac31c0ec01d2600568e89e802a4bbe081dd203c07ed60f2c5b611a14292e\": container with ID starting with c4b9ac31c0ec01d2600568e89e802a4bbe081dd203c07ed60f2c5b611a14292e not found: ID does not exist" containerID="c4b9ac31c0ec01d2600568e89e802a4bbe081dd203c07ed60f2c5b611a14292e" Jan 27 13:32:59 crc kubenswrapper[4786]: I0127 13:32:59.825280 
4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4b9ac31c0ec01d2600568e89e802a4bbe081dd203c07ed60f2c5b611a14292e"} err="failed to get container status \"c4b9ac31c0ec01d2600568e89e802a4bbe081dd203c07ed60f2c5b611a14292e\": rpc error: code = NotFound desc = could not find container \"c4b9ac31c0ec01d2600568e89e802a4bbe081dd203c07ed60f2c5b611a14292e\": container with ID starting with c4b9ac31c0ec01d2600568e89e802a4bbe081dd203c07ed60f2c5b611a14292e not found: ID does not exist" Jan 27 13:32:59 crc kubenswrapper[4786]: I0127 13:32:59.825294 4786 scope.go:117] "RemoveContainer" containerID="0e490dc565cb02a6d075bd246cdbbf86ec7c04ed34b4f0783787082fe55fa846" Jan 27 13:32:59 crc kubenswrapper[4786]: E0127 13:32:59.825856 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e490dc565cb02a6d075bd246cdbbf86ec7c04ed34b4f0783787082fe55fa846\": container with ID starting with 0e490dc565cb02a6d075bd246cdbbf86ec7c04ed34b4f0783787082fe55fa846 not found: ID does not exist" containerID="0e490dc565cb02a6d075bd246cdbbf86ec7c04ed34b4f0783787082fe55fa846" Jan 27 13:32:59 crc kubenswrapper[4786]: I0127 13:32:59.825894 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e490dc565cb02a6d075bd246cdbbf86ec7c04ed34b4f0783787082fe55fa846"} err="failed to get container status \"0e490dc565cb02a6d075bd246cdbbf86ec7c04ed34b4f0783787082fe55fa846\": rpc error: code = NotFound desc = could not find container \"0e490dc565cb02a6d075bd246cdbbf86ec7c04ed34b4f0783787082fe55fa846\": container with ID starting with 0e490dc565cb02a6d075bd246cdbbf86ec7c04ed34b4f0783787082fe55fa846 not found: ID does not exist" Jan 27 13:32:59 crc kubenswrapper[4786]: I0127 13:32:59.828016 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae866e62-55d9-4435-b58c-31de1563720c-kube-api-access-27qht" 
(OuterVolumeSpecName: "kube-api-access-27qht") pod "ae866e62-55d9-4435-b58c-31de1563720c" (UID: "ae866e62-55d9-4435-b58c-31de1563720c"). InnerVolumeSpecName "kube-api-access-27qht". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:32:59 crc kubenswrapper[4786]: I0127 13:32:59.842899 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae866e62-55d9-4435-b58c-31de1563720c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ae866e62-55d9-4435-b58c-31de1563720c" (UID: "ae866e62-55d9-4435-b58c-31de1563720c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:32:59 crc kubenswrapper[4786]: I0127 13:32:59.924283 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae866e62-55d9-4435-b58c-31de1563720c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 13:32:59 crc kubenswrapper[4786]: I0127 13:32:59.924319 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae866e62-55d9-4435-b58c-31de1563720c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 13:32:59 crc kubenswrapper[4786]: I0127 13:32:59.924330 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27qht\" (UniqueName: \"kubernetes.io/projected/ae866e62-55d9-4435-b58c-31de1563720c-kube-api-access-27qht\") on node \"crc\" DevicePath \"\"" Jan 27 13:33:00 crc kubenswrapper[4786]: I0127 13:33:00.057220 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8qfdq"] Jan 27 13:33:00 crc kubenswrapper[4786]: I0127 13:33:00.067086 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8qfdq"] Jan 27 13:33:01 crc kubenswrapper[4786]: I0127 13:33:01.474566 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ae866e62-55d9-4435-b58c-31de1563720c" path="/var/lib/kubelet/pods/ae866e62-55d9-4435-b58c-31de1563720c/volumes" Jan 27 13:33:09 crc kubenswrapper[4786]: I0127 13:33:09.533184 4786 patch_prober.go:28] interesting pod/machine-config-daemon-7bxtk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:33:09 crc kubenswrapper[4786]: I0127 13:33:09.533559 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:33:39 crc kubenswrapper[4786]: I0127 13:33:39.532591 4786 patch_prober.go:28] interesting pod/machine-config-daemon-7bxtk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:33:39 crc kubenswrapper[4786]: I0127 13:33:39.533152 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:33:39 crc kubenswrapper[4786]: I0127 13:33:39.533207 4786 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" Jan 27 13:33:39 crc kubenswrapper[4786]: I0127 13:33:39.533897 4786 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"2f94fef961bba31453718ee505c65aba4d5e2d46c581dccdf2daaebbe3a39906"} pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 13:33:39 crc kubenswrapper[4786]: I0127 13:33:39.533952 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" containerID="cri-o://2f94fef961bba31453718ee505c65aba4d5e2d46c581dccdf2daaebbe3a39906" gracePeriod=600 Jan 27 13:33:39 crc kubenswrapper[4786]: E0127 13:33:39.704841 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:33:40 crc kubenswrapper[4786]: I0127 13:33:40.050463 4786 generic.go:334] "Generic (PLEG): container finished" podID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerID="2f94fef961bba31453718ee505c65aba4d5e2d46c581dccdf2daaebbe3a39906" exitCode=0 Jan 27 13:33:40 crc kubenswrapper[4786]: I0127 13:33:40.050515 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" event={"ID":"2c6a2646-52f7-41be-8a81-3fed6eac75cc","Type":"ContainerDied","Data":"2f94fef961bba31453718ee505c65aba4d5e2d46c581dccdf2daaebbe3a39906"} Jan 27 13:33:40 crc kubenswrapper[4786]: I0127 13:33:40.050566 4786 scope.go:117] "RemoveContainer" containerID="f3ed072ebc16e765b7905a1ff330ce7f66cb1fdc6d65470e94dc155e6c8bb633" Jan 27 13:33:40 crc kubenswrapper[4786]: I0127 13:33:40.051169 4786 
scope.go:117] "RemoveContainer" containerID="2f94fef961bba31453718ee505c65aba4d5e2d46c581dccdf2daaebbe3a39906" Jan 27 13:33:40 crc kubenswrapper[4786]: E0127 13:33:40.051478 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:33:54 crc kubenswrapper[4786]: I0127 13:33:54.465489 4786 scope.go:117] "RemoveContainer" containerID="2f94fef961bba31453718ee505c65aba4d5e2d46c581dccdf2daaebbe3a39906" Jan 27 13:33:54 crc kubenswrapper[4786]: E0127 13:33:54.467232 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:33:58 crc kubenswrapper[4786]: I0127 13:33:58.318651 4786 scope.go:117] "RemoveContainer" containerID="ce4f479279a8aa6ac8901f0dde6e54524a6cb6d896f8ad35e5a0df41d4a69e2b" Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.230594 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-b9z5s"] Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.241503 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-b9z5s"] Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.250353 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-4nv65"] Jan 
27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.266112 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-4nv65"] Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.280326 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novaapidded-account-delete-fxxdr"] Jan 27 13:34:04 crc kubenswrapper[4786]: E0127 13:34:04.280803 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="641b2493-f39b-4215-b189-73893b0e03a4" containerName="registry-server" Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.280829 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="641b2493-f39b-4215-b189-73893b0e03a4" containerName="registry-server" Jan 27 13:34:04 crc kubenswrapper[4786]: E0127 13:34:04.280850 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b57d4e6a-134f-4153-b393-b3a831a78679" containerName="extract-content" Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.280859 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="b57d4e6a-134f-4153-b393-b3a831a78679" containerName="extract-content" Jan 27 13:34:04 crc kubenswrapper[4786]: E0127 13:34:04.280870 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b57d4e6a-134f-4153-b393-b3a831a78679" containerName="extract-utilities" Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.280878 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="b57d4e6a-134f-4153-b393-b3a831a78679" containerName="extract-utilities" Jan 27 13:34:04 crc kubenswrapper[4786]: E0127 13:34:04.280891 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b57d4e6a-134f-4153-b393-b3a831a78679" containerName="registry-server" Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.280899 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="b57d4e6a-134f-4153-b393-b3a831a78679" containerName="registry-server" Jan 27 13:34:04 crc kubenswrapper[4786]: E0127 
13:34:04.280911 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae866e62-55d9-4435-b58c-31de1563720c" containerName="extract-content" Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.280919 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae866e62-55d9-4435-b58c-31de1563720c" containerName="extract-content" Jan 27 13:34:04 crc kubenswrapper[4786]: E0127 13:34:04.280941 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae866e62-55d9-4435-b58c-31de1563720c" containerName="registry-server" Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.280949 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae866e62-55d9-4435-b58c-31de1563720c" containerName="registry-server" Jan 27 13:34:04 crc kubenswrapper[4786]: E0127 13:34:04.280966 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="641b2493-f39b-4215-b189-73893b0e03a4" containerName="extract-utilities" Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.280974 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="641b2493-f39b-4215-b189-73893b0e03a4" containerName="extract-utilities" Jan 27 13:34:04 crc kubenswrapper[4786]: E0127 13:34:04.280993 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="641b2493-f39b-4215-b189-73893b0e03a4" containerName="extract-content" Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.281002 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="641b2493-f39b-4215-b189-73893b0e03a4" containerName="extract-content" Jan 27 13:34:04 crc kubenswrapper[4786]: E0127 13:34:04.281014 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae866e62-55d9-4435-b58c-31de1563720c" containerName="extract-utilities" Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.281021 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae866e62-55d9-4435-b58c-31de1563720c" containerName="extract-utilities" Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 
13:34:04.281200 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="b57d4e6a-134f-4153-b393-b3a831a78679" containerName="registry-server" Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.281223 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="641b2493-f39b-4215-b189-73893b0e03a4" containerName="registry-server" Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.281235 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae866e62-55d9-4435-b58c-31de1563720c" containerName="registry-server" Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.281944 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapidded-account-delete-fxxdr" Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.295643 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novaapidded-account-delete-fxxdr"] Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.309035 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff214ed6-67b7-4cc7-a5e2-145f27edb08e-operator-scripts\") pod \"novaapidded-account-delete-fxxdr\" (UID: \"ff214ed6-67b7-4cc7-a5e2-145f27edb08e\") " pod="nova-kuttl-default/novaapidded-account-delete-fxxdr" Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.309125 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8qhh\" (UniqueName: \"kubernetes.io/projected/ff214ed6-67b7-4cc7-a5e2-145f27edb08e-kube-api-access-g8qhh\") pod \"novaapidded-account-delete-fxxdr\" (UID: \"ff214ed6-67b7-4cc7-a5e2-145f27edb08e\") " pod="nova-kuttl-default/novaapidded-account-delete-fxxdr" Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.348011 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novacell155b0-account-delete-fmhhg"] Jan 27 13:34:04 crc 
kubenswrapper[4786]: I0127 13:34:04.349293 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell155b0-account-delete-fmhhg" Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.362383 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell155b0-account-delete-fmhhg"] Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.410830 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff214ed6-67b7-4cc7-a5e2-145f27edb08e-operator-scripts\") pod \"novaapidded-account-delete-fxxdr\" (UID: \"ff214ed6-67b7-4cc7-a5e2-145f27edb08e\") " pod="nova-kuttl-default/novaapidded-account-delete-fxxdr" Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.411206 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4gtp\" (UniqueName: \"kubernetes.io/projected/d7586855-a8f2-4069-8104-78da30bffbb3-kube-api-access-w4gtp\") pod \"novacell155b0-account-delete-fmhhg\" (UID: \"d7586855-a8f2-4069-8104-78da30bffbb3\") " pod="nova-kuttl-default/novacell155b0-account-delete-fmhhg" Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.411304 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8qhh\" (UniqueName: \"kubernetes.io/projected/ff214ed6-67b7-4cc7-a5e2-145f27edb08e-kube-api-access-g8qhh\") pod \"novaapidded-account-delete-fxxdr\" (UID: \"ff214ed6-67b7-4cc7-a5e2-145f27edb08e\") " pod="nova-kuttl-default/novaapidded-account-delete-fxxdr" Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.411418 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7586855-a8f2-4069-8104-78da30bffbb3-operator-scripts\") pod \"novacell155b0-account-delete-fmhhg\" (UID: 
\"d7586855-a8f2-4069-8104-78da30bffbb3\") " pod="nova-kuttl-default/novacell155b0-account-delete-fmhhg" Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.411774 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff214ed6-67b7-4cc7-a5e2-145f27edb08e-operator-scripts\") pod \"novaapidded-account-delete-fxxdr\" (UID: \"ff214ed6-67b7-4cc7-a5e2-145f27edb08e\") " pod="nova-kuttl-default/novaapidded-account-delete-fxxdr" Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.429785 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novacell00ba4-account-delete-n49zr"] Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.431136 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell00ba4-account-delete-n49zr" Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.444011 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8qhh\" (UniqueName: \"kubernetes.io/projected/ff214ed6-67b7-4cc7-a5e2-145f27edb08e-kube-api-access-g8qhh\") pod \"novaapidded-account-delete-fxxdr\" (UID: \"ff214ed6-67b7-4cc7-a5e2-145f27edb08e\") " pod="nova-kuttl-default/novaapidded-account-delete-fxxdr" Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.482927 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.483261 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="4537cb47-5182-4574-bbe4-9edd9509e0f1" containerName="nova-kuttl-metadata-log" containerID="cri-o://0d9b44fda9c1496d9313d6057d76ac8ca39fa42e074d21359c894a47da5099f3" gracePeriod=30 Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.483574 4786 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="4537cb47-5182-4574-bbe4-9edd9509e0f1" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://eb7411b1313a93f59fb8e199a0312ac5410432a977b02983ca5d247f155bf918" gracePeriod=30 Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.518887 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4gtp\" (UniqueName: \"kubernetes.io/projected/d7586855-a8f2-4069-8104-78da30bffbb3-kube-api-access-w4gtp\") pod \"novacell155b0-account-delete-fmhhg\" (UID: \"d7586855-a8f2-4069-8104-78da30bffbb3\") " pod="nova-kuttl-default/novacell155b0-account-delete-fmhhg" Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.518981 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjntv\" (UniqueName: \"kubernetes.io/projected/fd5ae447-7421-4cae-9768-29425499d2ec-kube-api-access-fjntv\") pod \"novacell00ba4-account-delete-n49zr\" (UID: \"fd5ae447-7421-4cae-9768-29425499d2ec\") " pod="nova-kuttl-default/novacell00ba4-account-delete-n49zr" Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.519154 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7586855-a8f2-4069-8104-78da30bffbb3-operator-scripts\") pod \"novacell155b0-account-delete-fmhhg\" (UID: \"d7586855-a8f2-4069-8104-78da30bffbb3\") " pod="nova-kuttl-default/novacell155b0-account-delete-fmhhg" Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.519214 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd5ae447-7421-4cae-9768-29425499d2ec-operator-scripts\") pod \"novacell00ba4-account-delete-n49zr\" (UID: \"fd5ae447-7421-4cae-9768-29425499d2ec\") " pod="nova-kuttl-default/novacell00ba4-account-delete-n49zr" Jan 27 13:34:04 crc kubenswrapper[4786]: 
I0127 13:34:04.520829 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7586855-a8f2-4069-8104-78da30bffbb3-operator-scripts\") pod \"novacell155b0-account-delete-fmhhg\" (UID: \"d7586855-a8f2-4069-8104-78da30bffbb3\") " pod="nova-kuttl-default/novacell155b0-account-delete-fmhhg" Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.544449 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell00ba4-account-delete-n49zr"] Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.575725 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.575987 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="71ce96de-64df-4cfa-a871-380544ed87d2" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://e02f04eb115ae0b715b6adc44ce77d01655d3d4f256a219b968212af9aa219e3" gracePeriod=30 Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.578588 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4gtp\" (UniqueName: \"kubernetes.io/projected/d7586855-a8f2-4069-8104-78da30bffbb3-kube-api-access-w4gtp\") pod \"novacell155b0-account-delete-fmhhg\" (UID: \"d7586855-a8f2-4069-8104-78da30bffbb3\") " pod="nova-kuttl-default/novacell155b0-account-delete-fmhhg" Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.594061 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.594399 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="701d1213-1604-4347-989c-98f88c361f06" containerName="nova-kuttl-api-log" 
containerID="cri-o://c71820b87588674a4174fdfffbf504e50348107b3351d25b5ae13306ac5ea846" gracePeriod=30 Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.594716 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="701d1213-1604-4347-989c-98f88c361f06" containerName="nova-kuttl-api-api" containerID="cri-o://1788d372258353cb8d6de86fa631004f04f7e95eda7ee4fd5b7ecf8bb3bb6ef3" gracePeriod=30 Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.613182 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapidded-account-delete-fxxdr" Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.615448 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.615761 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podUID="8f166195-a387-40f9-b587-3e8e5c3afcf9" containerName="nova-kuttl-cell1-novncproxy-novncproxy" containerID="cri-o://360d2cc0c05c30a9fdead1af09fd2365e02cdecf4bfb9c61ddadbdf1153020c8" gracePeriod=30 Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.626354 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd5ae447-7421-4cae-9768-29425499d2ec-operator-scripts\") pod \"novacell00ba4-account-delete-n49zr\" (UID: \"fd5ae447-7421-4cae-9768-29425499d2ec\") " pod="nova-kuttl-default/novacell00ba4-account-delete-n49zr" Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.626552 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjntv\" (UniqueName: \"kubernetes.io/projected/fd5ae447-7421-4cae-9768-29425499d2ec-kube-api-access-fjntv\") pod \"novacell00ba4-account-delete-n49zr\" (UID: 
\"fd5ae447-7421-4cae-9768-29425499d2ec\") " pod="nova-kuttl-default/novacell00ba4-account-delete-n49zr" Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.627710 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd5ae447-7421-4cae-9768-29425499d2ec-operator-scripts\") pod \"novacell00ba4-account-delete-n49zr\" (UID: \"fd5ae447-7421-4cae-9768-29425499d2ec\") " pod="nova-kuttl-default/novacell00ba4-account-delete-n49zr" Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.639519 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.639950 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podUID="02cab41f-e7f7-411d-a37e-4406f5c7bdc0" containerName="nova-kuttl-cell0-conductor-conductor" containerID="cri-o://308beb7d926b8e84a736bac265b78e833053e6cef7f55c94ea731bc385fa1734" gracePeriod=30 Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.656258 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjntv\" (UniqueName: \"kubernetes.io/projected/fd5ae447-7421-4cae-9768-29425499d2ec-kube-api-access-fjntv\") pod \"novacell00ba4-account-delete-n49zr\" (UID: \"fd5ae447-7421-4cae-9768-29425499d2ec\") " pod="nova-kuttl-default/novacell00ba4-account-delete-n49zr" Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.656401 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-fwsr4"] Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.661860 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-fwsr4"] Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.676973 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell155b0-account-delete-fmhhg" Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.706200 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.706437 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podUID="78658d70-0256-4b13-9804-88758f0e33e5" containerName="nova-kuttl-cell1-conductor-conductor" containerID="cri-o://9d3b271f28d55e1ab3872a515757f751be0a2b7246f4719493c6e43cbf53057c" gracePeriod=30 Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.717989 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-d54j5"] Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.726006 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-d54j5"] Jan 27 13:34:04 crc kubenswrapper[4786]: I0127 13:34:04.813795 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell00ba4-account-delete-n49zr" Jan 27 13:34:04 crc kubenswrapper[4786]: E0127 13:34:04.954563 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e02f04eb115ae0b715b6adc44ce77d01655d3d4f256a219b968212af9aa219e3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 13:34:04 crc kubenswrapper[4786]: E0127 13:34:04.958172 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e02f04eb115ae0b715b6adc44ce77d01655d3d4f256a219b968212af9aa219e3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 13:34:04 crc kubenswrapper[4786]: E0127 13:34:04.961685 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e02f04eb115ae0b715b6adc44ce77d01655d3d4f256a219b968212af9aa219e3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 13:34:04 crc kubenswrapper[4786]: E0127 13:34:04.961731 4786 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="71ce96de-64df-4cfa-a871-380544ed87d2" containerName="nova-kuttl-scheduler-scheduler" Jan 27 13:34:05 crc kubenswrapper[4786]: I0127 13:34:05.268246 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novaapidded-account-delete-fxxdr"] Jan 27 13:34:05 crc kubenswrapper[4786]: I0127 13:34:05.273878 4786 generic.go:334] "Generic (PLEG): container finished" 
podID="701d1213-1604-4347-989c-98f88c361f06" containerID="c71820b87588674a4174fdfffbf504e50348107b3351d25b5ae13306ac5ea846" exitCode=143 Jan 27 13:34:05 crc kubenswrapper[4786]: I0127 13:34:05.273948 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"701d1213-1604-4347-989c-98f88c361f06","Type":"ContainerDied","Data":"c71820b87588674a4174fdfffbf504e50348107b3351d25b5ae13306ac5ea846"} Jan 27 13:34:05 crc kubenswrapper[4786]: I0127 13:34:05.283050 4786 generic.go:334] "Generic (PLEG): container finished" podID="8f166195-a387-40f9-b587-3e8e5c3afcf9" containerID="360d2cc0c05c30a9fdead1af09fd2365e02cdecf4bfb9c61ddadbdf1153020c8" exitCode=0 Jan 27 13:34:05 crc kubenswrapper[4786]: I0127 13:34:05.283193 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"8f166195-a387-40f9-b587-3e8e5c3afcf9","Type":"ContainerDied","Data":"360d2cc0c05c30a9fdead1af09fd2365e02cdecf4bfb9c61ddadbdf1153020c8"} Jan 27 13:34:05 crc kubenswrapper[4786]: I0127 13:34:05.296588 4786 generic.go:334] "Generic (PLEG): container finished" podID="4537cb47-5182-4574-bbe4-9edd9509e0f1" containerID="0d9b44fda9c1496d9313d6057d76ac8ca39fa42e074d21359c894a47da5099f3" exitCode=143 Jan 27 13:34:05 crc kubenswrapper[4786]: I0127 13:34:05.296737 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"4537cb47-5182-4574-bbe4-9edd9509e0f1","Type":"ContainerDied","Data":"0d9b44fda9c1496d9313d6057d76ac8ca39fa42e074d21359c894a47da5099f3"} Jan 27 13:34:05 crc kubenswrapper[4786]: I0127 13:34:05.420051 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell155b0-account-delete-fmhhg"] Jan 27 13:34:05 crc kubenswrapper[4786]: I0127 13:34:05.459990 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell00ba4-account-delete-n49zr"] Jan 27 13:34:05 crc kubenswrapper[4786]: 
I0127 13:34:05.485908 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08dce0df-922f-4538-abaf-ec64509f246f" path="/var/lib/kubelet/pods/08dce0df-922f-4538-abaf-ec64509f246f/volumes" Jan 27 13:34:05 crc kubenswrapper[4786]: I0127 13:34:05.486573 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d22bccf-b474-465b-8a31-d5b965a5448a" path="/var/lib/kubelet/pods/2d22bccf-b474-465b-8a31-d5b965a5448a/volumes" Jan 27 13:34:05 crc kubenswrapper[4786]: I0127 13:34:05.496974 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a87bac46-e96f-4630-a7a7-4399b8465ffc" path="/var/lib/kubelet/pods/a87bac46-e96f-4630-a7a7-4399b8465ffc/volumes" Jan 27 13:34:05 crc kubenswrapper[4786]: I0127 13:34:05.497700 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc35b06a-c49a-45a3-8bac-626ad8256ad2" path="/var/lib/kubelet/pods/fc35b06a-c49a-45a3-8bac-626ad8256ad2/volumes" Jan 27 13:34:05 crc kubenswrapper[4786]: I0127 13:34:05.728290 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:34:05 crc kubenswrapper[4786]: I0127 13:34:05.754804 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfzff\" (UniqueName: \"kubernetes.io/projected/8f166195-a387-40f9-b587-3e8e5c3afcf9-kube-api-access-nfzff\") pod \"8f166195-a387-40f9-b587-3e8e5c3afcf9\" (UID: \"8f166195-a387-40f9-b587-3e8e5c3afcf9\") " Jan 27 13:34:05 crc kubenswrapper[4786]: I0127 13:34:05.754965 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f166195-a387-40f9-b587-3e8e5c3afcf9-config-data\") pod \"8f166195-a387-40f9-b587-3e8e5c3afcf9\" (UID: \"8f166195-a387-40f9-b587-3e8e5c3afcf9\") " Jan 27 13:34:05 crc kubenswrapper[4786]: I0127 13:34:05.793385 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f166195-a387-40f9-b587-3e8e5c3afcf9-kube-api-access-nfzff" (OuterVolumeSpecName: "kube-api-access-nfzff") pod "8f166195-a387-40f9-b587-3e8e5c3afcf9" (UID: "8f166195-a387-40f9-b587-3e8e5c3afcf9"). InnerVolumeSpecName "kube-api-access-nfzff". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:34:05 crc kubenswrapper[4786]: I0127 13:34:05.800722 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f166195-a387-40f9-b587-3e8e5c3afcf9-config-data" (OuterVolumeSpecName: "config-data") pod "8f166195-a387-40f9-b587-3e8e5c3afcf9" (UID: "8f166195-a387-40f9-b587-3e8e5c3afcf9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:34:05 crc kubenswrapper[4786]: I0127 13:34:05.856895 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfzff\" (UniqueName: \"kubernetes.io/projected/8f166195-a387-40f9-b587-3e8e5c3afcf9-kube-api-access-nfzff\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:05 crc kubenswrapper[4786]: I0127 13:34:05.856933 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f166195-a387-40f9-b587-3e8e5c3afcf9-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:06 crc kubenswrapper[4786]: I0127 13:34:06.309497 4786 generic.go:334] "Generic (PLEG): container finished" podID="ff214ed6-67b7-4cc7-a5e2-145f27edb08e" containerID="54f5da1979232a43b00995472025f54884941588c466902d8d5d23dd25d96a28" exitCode=0 Jan 27 13:34:06 crc kubenswrapper[4786]: I0127 13:34:06.309542 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapidded-account-delete-fxxdr" event={"ID":"ff214ed6-67b7-4cc7-a5e2-145f27edb08e","Type":"ContainerDied","Data":"54f5da1979232a43b00995472025f54884941588c466902d8d5d23dd25d96a28"} Jan 27 13:34:06 crc kubenswrapper[4786]: I0127 13:34:06.309593 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapidded-account-delete-fxxdr" event={"ID":"ff214ed6-67b7-4cc7-a5e2-145f27edb08e","Type":"ContainerStarted","Data":"ee8f96882c246f3cc32cb260d855129a9f307502abee060aa80adef65ef5a829"} Jan 27 13:34:06 crc kubenswrapper[4786]: I0127 13:34:06.311223 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"8f166195-a387-40f9-b587-3e8e5c3afcf9","Type":"ContainerDied","Data":"87266f589ad18e20a0b9f1a17950861bed51d58ebaefcec5c2f6382898661579"} Jan 27 13:34:06 crc kubenswrapper[4786]: I0127 13:34:06.311267 4786 scope.go:117] "RemoveContainer" 
containerID="360d2cc0c05c30a9fdead1af09fd2365e02cdecf4bfb9c61ddadbdf1153020c8" Jan 27 13:34:06 crc kubenswrapper[4786]: I0127 13:34:06.311357 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:34:06 crc kubenswrapper[4786]: I0127 13:34:06.329016 4786 generic.go:334] "Generic (PLEG): container finished" podID="fd5ae447-7421-4cae-9768-29425499d2ec" containerID="549a40290a02d4347f283c02d433598be3a292f571f46e337e9d04a8a383df07" exitCode=0 Jan 27 13:34:06 crc kubenswrapper[4786]: I0127 13:34:06.329098 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell00ba4-account-delete-n49zr" event={"ID":"fd5ae447-7421-4cae-9768-29425499d2ec","Type":"ContainerDied","Data":"549a40290a02d4347f283c02d433598be3a292f571f46e337e9d04a8a383df07"} Jan 27 13:34:06 crc kubenswrapper[4786]: I0127 13:34:06.329128 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell00ba4-account-delete-n49zr" event={"ID":"fd5ae447-7421-4cae-9768-29425499d2ec","Type":"ContainerStarted","Data":"8c026c14bcd2cc03c2cab307c48789ea18aff9b32a29f83be81a7a033825bdf4"} Jan 27 13:34:06 crc kubenswrapper[4786]: I0127 13:34:06.336066 4786 generic.go:334] "Generic (PLEG): container finished" podID="02cab41f-e7f7-411d-a37e-4406f5c7bdc0" containerID="308beb7d926b8e84a736bac265b78e833053e6cef7f55c94ea731bc385fa1734" exitCode=0 Jan 27 13:34:06 crc kubenswrapper[4786]: I0127 13:34:06.336125 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"02cab41f-e7f7-411d-a37e-4406f5c7bdc0","Type":"ContainerDied","Data":"308beb7d926b8e84a736bac265b78e833053e6cef7f55c94ea731bc385fa1734"} Jan 27 13:34:06 crc kubenswrapper[4786]: I0127 13:34:06.338058 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:34:06 crc kubenswrapper[4786]: I0127 13:34:06.353756 4786 generic.go:334] "Generic (PLEG): container finished" podID="d7586855-a8f2-4069-8104-78da30bffbb3" containerID="726ebba3e24e1b1f548bcd5d15f0128d37425cb3a8dbec8915fa0f08efa951a8" exitCode=0 Jan 27 13:34:06 crc kubenswrapper[4786]: I0127 13:34:06.353836 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell155b0-account-delete-fmhhg" event={"ID":"d7586855-a8f2-4069-8104-78da30bffbb3","Type":"ContainerDied","Data":"726ebba3e24e1b1f548bcd5d15f0128d37425cb3a8dbec8915fa0f08efa951a8"} Jan 27 13:34:06 crc kubenswrapper[4786]: I0127 13:34:06.353871 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell155b0-account-delete-fmhhg" event={"ID":"d7586855-a8f2-4069-8104-78da30bffbb3","Type":"ContainerStarted","Data":"3235ece7587f19b4427c611a85b5dea26254a5a2f6889d14a798ec4207a3159d"} Jan 27 13:34:06 crc kubenswrapper[4786]: I0127 13:34:06.368817 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02cab41f-e7f7-411d-a37e-4406f5c7bdc0-config-data\") pod \"02cab41f-e7f7-411d-a37e-4406f5c7bdc0\" (UID: \"02cab41f-e7f7-411d-a37e-4406f5c7bdc0\") " Jan 27 13:34:06 crc kubenswrapper[4786]: I0127 13:34:06.369199 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh7bg\" (UniqueName: \"kubernetes.io/projected/02cab41f-e7f7-411d-a37e-4406f5c7bdc0-kube-api-access-vh7bg\") pod \"02cab41f-e7f7-411d-a37e-4406f5c7bdc0\" (UID: \"02cab41f-e7f7-411d-a37e-4406f5c7bdc0\") " Jan 27 13:34:06 crc kubenswrapper[4786]: I0127 13:34:06.375044 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02cab41f-e7f7-411d-a37e-4406f5c7bdc0-kube-api-access-vh7bg" (OuterVolumeSpecName: "kube-api-access-vh7bg") pod 
"02cab41f-e7f7-411d-a37e-4406f5c7bdc0" (UID: "02cab41f-e7f7-411d-a37e-4406f5c7bdc0"). InnerVolumeSpecName "kube-api-access-vh7bg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:34:06 crc kubenswrapper[4786]: I0127 13:34:06.401406 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 27 13:34:06 crc kubenswrapper[4786]: I0127 13:34:06.401543 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02cab41f-e7f7-411d-a37e-4406f5c7bdc0-config-data" (OuterVolumeSpecName: "config-data") pod "02cab41f-e7f7-411d-a37e-4406f5c7bdc0" (UID: "02cab41f-e7f7-411d-a37e-4406f5c7bdc0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:34:06 crc kubenswrapper[4786]: I0127 13:34:06.408397 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 27 13:34:06 crc kubenswrapper[4786]: I0127 13:34:06.472744 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vh7bg\" (UniqueName: \"kubernetes.io/projected/02cab41f-e7f7-411d-a37e-4406f5c7bdc0-kube-api-access-vh7bg\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:06 crc kubenswrapper[4786]: I0127 13:34:06.473014 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02cab41f-e7f7-411d-a37e-4406f5c7bdc0-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:07 crc kubenswrapper[4786]: I0127 13:34:07.384115 4786 generic.go:334] "Generic (PLEG): container finished" podID="78658d70-0256-4b13-9804-88758f0e33e5" containerID="9d3b271f28d55e1ab3872a515757f751be0a2b7246f4719493c6e43cbf53057c" exitCode=0 Jan 27 13:34:07 crc kubenswrapper[4786]: I0127 13:34:07.385401 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" 
event={"ID":"78658d70-0256-4b13-9804-88758f0e33e5","Type":"ContainerDied","Data":"9d3b271f28d55e1ab3872a515757f751be0a2b7246f4719493c6e43cbf53057c"} Jan 27 13:34:07 crc kubenswrapper[4786]: I0127 13:34:07.388132 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:34:07 crc kubenswrapper[4786]: I0127 13:34:07.390138 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"02cab41f-e7f7-411d-a37e-4406f5c7bdc0","Type":"ContainerDied","Data":"8dba88b2d6a862e23ae738d35e5ad5f98f12da587d93be41908fea2459215d16"} Jan 27 13:34:07 crc kubenswrapper[4786]: I0127 13:34:07.390329 4786 scope.go:117] "RemoveContainer" containerID="308beb7d926b8e84a736bac265b78e833053e6cef7f55c94ea731bc385fa1734" Jan 27 13:34:07 crc kubenswrapper[4786]: I0127 13:34:07.422314 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 27 13:34:07 crc kubenswrapper[4786]: I0127 13:34:07.428482 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 27 13:34:07 crc kubenswrapper[4786]: I0127 13:34:07.471547 4786 scope.go:117] "RemoveContainer" containerID="2f94fef961bba31453718ee505c65aba4d5e2d46c581dccdf2daaebbe3a39906" Jan 27 13:34:07 crc kubenswrapper[4786]: E0127 13:34:07.471967 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:34:07 crc kubenswrapper[4786]: I0127 13:34:07.475588 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="02cab41f-e7f7-411d-a37e-4406f5c7bdc0" path="/var/lib/kubelet/pods/02cab41f-e7f7-411d-a37e-4406f5c7bdc0/volumes" Jan 27 13:34:07 crc kubenswrapper[4786]: I0127 13:34:07.476917 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f166195-a387-40f9-b587-3e8e5c3afcf9" path="/var/lib/kubelet/pods/8f166195-a387-40f9-b587-3e8e5c3afcf9/volumes" Jan 27 13:34:07 crc kubenswrapper[4786]: I0127 13:34:07.743323 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapidded-account-delete-fxxdr" Jan 27 13:34:07 crc kubenswrapper[4786]: I0127 13:34:07.801921 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8qhh\" (UniqueName: \"kubernetes.io/projected/ff214ed6-67b7-4cc7-a5e2-145f27edb08e-kube-api-access-g8qhh\") pod \"ff214ed6-67b7-4cc7-a5e2-145f27edb08e\" (UID: \"ff214ed6-67b7-4cc7-a5e2-145f27edb08e\") " Jan 27 13:34:07 crc kubenswrapper[4786]: I0127 13:34:07.802000 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff214ed6-67b7-4cc7-a5e2-145f27edb08e-operator-scripts\") pod \"ff214ed6-67b7-4cc7-a5e2-145f27edb08e\" (UID: \"ff214ed6-67b7-4cc7-a5e2-145f27edb08e\") " Jan 27 13:34:07 crc kubenswrapper[4786]: I0127 13:34:07.802772 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff214ed6-67b7-4cc7-a5e2-145f27edb08e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ff214ed6-67b7-4cc7-a5e2-145f27edb08e" (UID: "ff214ed6-67b7-4cc7-a5e2-145f27edb08e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:34:07 crc kubenswrapper[4786]: I0127 13:34:07.808034 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff214ed6-67b7-4cc7-a5e2-145f27edb08e-kube-api-access-g8qhh" (OuterVolumeSpecName: "kube-api-access-g8qhh") pod "ff214ed6-67b7-4cc7-a5e2-145f27edb08e" (UID: "ff214ed6-67b7-4cc7-a5e2-145f27edb08e"). InnerVolumeSpecName "kube-api-access-g8qhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:34:07 crc kubenswrapper[4786]: I0127 13:34:07.904237 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8qhh\" (UniqueName: \"kubernetes.io/projected/ff214ed6-67b7-4cc7-a5e2-145f27edb08e-kube-api-access-g8qhh\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:07 crc kubenswrapper[4786]: I0127 13:34:07.904282 4786 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff214ed6-67b7-4cc7-a5e2-145f27edb08e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:07 crc kubenswrapper[4786]: I0127 13:34:07.926861 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:34:07 crc kubenswrapper[4786]: I0127 13:34:07.934164 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell00ba4-account-delete-n49zr" Jan 27 13:34:07 crc kubenswrapper[4786]: I0127 13:34:07.940512 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell155b0-account-delete-fmhhg" Jan 27 13:34:07 crc kubenswrapper[4786]: I0127 13:34:07.996889 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="4537cb47-5182-4574-bbe4-9edd9509e0f1" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.138:8775/\": read tcp 10.217.0.2:55724->10.217.0.138:8775: read: connection reset by peer" Jan 27 13:34:07 crc kubenswrapper[4786]: I0127 13:34:07.996926 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="4537cb47-5182-4574-bbe4-9edd9509e0f1" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.138:8775/\": read tcp 10.217.0.2:55714->10.217.0.138:8775: read: connection reset by peer" Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.005089 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4gtp\" (UniqueName: \"kubernetes.io/projected/d7586855-a8f2-4069-8104-78da30bffbb3-kube-api-access-w4gtp\") pod \"d7586855-a8f2-4069-8104-78da30bffbb3\" (UID: \"d7586855-a8f2-4069-8104-78da30bffbb3\") " Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.006221 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78658d70-0256-4b13-9804-88758f0e33e5-config-data\") pod \"78658d70-0256-4b13-9804-88758f0e33e5\" (UID: \"78658d70-0256-4b13-9804-88758f0e33e5\") " Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.006316 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd5ae447-7421-4cae-9768-29425499d2ec-operator-scripts\") pod \"fd5ae447-7421-4cae-9768-29425499d2ec\" (UID: \"fd5ae447-7421-4cae-9768-29425499d2ec\") " Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 
13:34:08.006358 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7586855-a8f2-4069-8104-78da30bffbb3-operator-scripts\") pod \"d7586855-a8f2-4069-8104-78da30bffbb3\" (UID: \"d7586855-a8f2-4069-8104-78da30bffbb3\") " Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.006457 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjntv\" (UniqueName: \"kubernetes.io/projected/fd5ae447-7421-4cae-9768-29425499d2ec-kube-api-access-fjntv\") pod \"fd5ae447-7421-4cae-9768-29425499d2ec\" (UID: \"fd5ae447-7421-4cae-9768-29425499d2ec\") " Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.006497 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfmmp\" (UniqueName: \"kubernetes.io/projected/78658d70-0256-4b13-9804-88758f0e33e5-kube-api-access-lfmmp\") pod \"78658d70-0256-4b13-9804-88758f0e33e5\" (UID: \"78658d70-0256-4b13-9804-88758f0e33e5\") " Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.006999 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7586855-a8f2-4069-8104-78da30bffbb3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d7586855-a8f2-4069-8104-78da30bffbb3" (UID: "d7586855-a8f2-4069-8104-78da30bffbb3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.007122 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd5ae447-7421-4cae-9768-29425499d2ec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fd5ae447-7421-4cae-9768-29425499d2ec" (UID: "fd5ae447-7421-4cae-9768-29425499d2ec"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.007716 4786 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd5ae447-7421-4cae-9768-29425499d2ec-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.007740 4786 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7586855-a8f2-4069-8104-78da30bffbb3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.009496 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7586855-a8f2-4069-8104-78da30bffbb3-kube-api-access-w4gtp" (OuterVolumeSpecName: "kube-api-access-w4gtp") pod "d7586855-a8f2-4069-8104-78da30bffbb3" (UID: "d7586855-a8f2-4069-8104-78da30bffbb3"). InnerVolumeSpecName "kube-api-access-w4gtp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.010155 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78658d70-0256-4b13-9804-88758f0e33e5-kube-api-access-lfmmp" (OuterVolumeSpecName: "kube-api-access-lfmmp") pod "78658d70-0256-4b13-9804-88758f0e33e5" (UID: "78658d70-0256-4b13-9804-88758f0e33e5"). InnerVolumeSpecName "kube-api-access-lfmmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.013867 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd5ae447-7421-4cae-9768-29425499d2ec-kube-api-access-fjntv" (OuterVolumeSpecName: "kube-api-access-fjntv") pod "fd5ae447-7421-4cae-9768-29425499d2ec" (UID: "fd5ae447-7421-4cae-9768-29425499d2ec"). InnerVolumeSpecName "kube-api-access-fjntv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.043575 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78658d70-0256-4b13-9804-88758f0e33e5-config-data" (OuterVolumeSpecName: "config-data") pod "78658d70-0256-4b13-9804-88758f0e33e5" (UID: "78658d70-0256-4b13-9804-88758f0e33e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.109765 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78658d70-0256-4b13-9804-88758f0e33e5-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.109816 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjntv\" (UniqueName: \"kubernetes.io/projected/fd5ae447-7421-4cae-9768-29425499d2ec-kube-api-access-fjntv\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.109831 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfmmp\" (UniqueName: \"kubernetes.io/projected/78658d70-0256-4b13-9804-88758f0e33e5-kube-api-access-lfmmp\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.109844 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4gtp\" (UniqueName: \"kubernetes.io/projected/d7586855-a8f2-4069-8104-78da30bffbb3-kube-api-access-w4gtp\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.404225 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell155b0-account-delete-fmhhg" Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.404234 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell155b0-account-delete-fmhhg" event={"ID":"d7586855-a8f2-4069-8104-78da30bffbb3","Type":"ContainerDied","Data":"3235ece7587f19b4427c611a85b5dea26254a5a2f6889d14a798ec4207a3159d"} Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.404753 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3235ece7587f19b4427c611a85b5dea26254a5a2f6889d14a798ec4207a3159d" Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.407317 4786 generic.go:334] "Generic (PLEG): container finished" podID="701d1213-1604-4347-989c-98f88c361f06" containerID="1788d372258353cb8d6de86fa631004f04f7e95eda7ee4fd5b7ecf8bb3bb6ef3" exitCode=0 Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.407414 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"701d1213-1604-4347-989c-98f88c361f06","Type":"ContainerDied","Data":"1788d372258353cb8d6de86fa631004f04f7e95eda7ee4fd5b7ecf8bb3bb6ef3"} Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.407513 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.416237 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapidded-account-delete-fxxdr" event={"ID":"ff214ed6-67b7-4cc7-a5e2-145f27edb08e","Type":"ContainerDied","Data":"ee8f96882c246f3cc32cb260d855129a9f307502abee060aa80adef65ef5a829"} Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.416274 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee8f96882c246f3cc32cb260d855129a9f307502abee060aa80adef65ef5a829" Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.416325 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapidded-account-delete-fxxdr" Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.420132 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"78658d70-0256-4b13-9804-88758f0e33e5","Type":"ContainerDied","Data":"9c4c201a4702d7a864e7e7d5f531a1d648f26576ceec1b4833c3e6393aee8b8b"} Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.420181 4786 scope.go:117] "RemoveContainer" containerID="9d3b271f28d55e1ab3872a515757f751be0a2b7246f4719493c6e43cbf53057c" Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.420292 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.432717 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell00ba4-account-delete-n49zr" event={"ID":"fd5ae447-7421-4cae-9768-29425499d2ec","Type":"ContainerDied","Data":"8c026c14bcd2cc03c2cab307c48789ea18aff9b32a29f83be81a7a033825bdf4"} Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.432763 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c026c14bcd2cc03c2cab307c48789ea18aff9b32a29f83be81a7a033825bdf4" Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.432847 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell00ba4-account-delete-n49zr" Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.452908 4786 generic.go:334] "Generic (PLEG): container finished" podID="4537cb47-5182-4574-bbe4-9edd9509e0f1" containerID="eb7411b1313a93f59fb8e199a0312ac5410432a977b02983ca5d247f155bf918" exitCode=0 Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.452995 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.453013 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"4537cb47-5182-4574-bbe4-9edd9509e0f1","Type":"ContainerDied","Data":"eb7411b1313a93f59fb8e199a0312ac5410432a977b02983ca5d247f155bf918"} Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.458492 4786 scope.go:117] "RemoveContainer" containerID="eb7411b1313a93f59fb8e199a0312ac5410432a977b02983ca5d247f155bf918" Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.493961 4786 scope.go:117] "RemoveContainer" containerID="0d9b44fda9c1496d9313d6057d76ac8ca39fa42e074d21359c894a47da5099f3" Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.495957 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.511784 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.514992 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4537cb47-5182-4574-bbe4-9edd9509e0f1-logs\") pod \"4537cb47-5182-4574-bbe4-9edd9509e0f1\" (UID: \"4537cb47-5182-4574-bbe4-9edd9509e0f1\") " Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.515093 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4537cb47-5182-4574-bbe4-9edd9509e0f1-config-data\") pod \"4537cb47-5182-4574-bbe4-9edd9509e0f1\" (UID: \"4537cb47-5182-4574-bbe4-9edd9509e0f1\") " Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.515133 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dlhl\" (UniqueName: 
\"kubernetes.io/projected/4537cb47-5182-4574-bbe4-9edd9509e0f1-kube-api-access-6dlhl\") pod \"4537cb47-5182-4574-bbe4-9edd9509e0f1\" (UID: \"4537cb47-5182-4574-bbe4-9edd9509e0f1\") " Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.515678 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4537cb47-5182-4574-bbe4-9edd9509e0f1-logs" (OuterVolumeSpecName: "logs") pod "4537cb47-5182-4574-bbe4-9edd9509e0f1" (UID: "4537cb47-5182-4574-bbe4-9edd9509e0f1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.521592 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4537cb47-5182-4574-bbe4-9edd9509e0f1-kube-api-access-6dlhl" (OuterVolumeSpecName: "kube-api-access-6dlhl") pod "4537cb47-5182-4574-bbe4-9edd9509e0f1" (UID: "4537cb47-5182-4574-bbe4-9edd9509e0f1"). InnerVolumeSpecName "kube-api-access-6dlhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.528683 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.549015 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4537cb47-5182-4574-bbe4-9edd9509e0f1-config-data" (OuterVolumeSpecName: "config-data") pod "4537cb47-5182-4574-bbe4-9edd9509e0f1" (UID: "4537cb47-5182-4574-bbe4-9edd9509e0f1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.617051 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9gx5\" (UniqueName: \"kubernetes.io/projected/701d1213-1604-4347-989c-98f88c361f06-kube-api-access-h9gx5\") pod \"701d1213-1604-4347-989c-98f88c361f06\" (UID: \"701d1213-1604-4347-989c-98f88c361f06\") " Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.617114 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/701d1213-1604-4347-989c-98f88c361f06-logs\") pod \"701d1213-1604-4347-989c-98f88c361f06\" (UID: \"701d1213-1604-4347-989c-98f88c361f06\") " Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.617723 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/701d1213-1604-4347-989c-98f88c361f06-config-data\") pod \"701d1213-1604-4347-989c-98f88c361f06\" (UID: \"701d1213-1604-4347-989c-98f88c361f06\") " Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.618128 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/701d1213-1604-4347-989c-98f88c361f06-logs" (OuterVolumeSpecName: "logs") pod "701d1213-1604-4347-989c-98f88c361f06" (UID: "701d1213-1604-4347-989c-98f88c361f06"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.618369 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4537cb47-5182-4574-bbe4-9edd9509e0f1-logs\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.618387 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4537cb47-5182-4574-bbe4-9edd9509e0f1-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.618396 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dlhl\" (UniqueName: \"kubernetes.io/projected/4537cb47-5182-4574-bbe4-9edd9509e0f1-kube-api-access-6dlhl\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.618424 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/701d1213-1604-4347-989c-98f88c361f06-logs\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.621333 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/701d1213-1604-4347-989c-98f88c361f06-kube-api-access-h9gx5" (OuterVolumeSpecName: "kube-api-access-h9gx5") pod "701d1213-1604-4347-989c-98f88c361f06" (UID: "701d1213-1604-4347-989c-98f88c361f06"). InnerVolumeSpecName "kube-api-access-h9gx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.644712 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/701d1213-1604-4347-989c-98f88c361f06-config-data" (OuterVolumeSpecName: "config-data") pod "701d1213-1604-4347-989c-98f88c361f06" (UID: "701d1213-1604-4347-989c-98f88c361f06"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.719837 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9gx5\" (UniqueName: \"kubernetes.io/projected/701d1213-1604-4347-989c-98f88c361f06-kube-api-access-h9gx5\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.719866 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/701d1213-1604-4347-989c-98f88c361f06-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.793695 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:34:08 crc kubenswrapper[4786]: I0127 13:34:08.799683 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.285832 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-db-create-gjf4t"] Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.295294 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-db-create-gjf4t"] Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.323083 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-dded-account-create-update-5nlwb"] Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.332159 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-dded-account-create-update-5nlwb"] Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.353690 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novaapidded-account-delete-fxxdr"] Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.362465 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novaapidded-account-delete-fxxdr"] Jan 27 
13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.390307 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-4sc9k"] Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.402032 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-4sc9k"] Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.409238 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novacell155b0-account-delete-fmhhg"] Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.419022 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-55b0-account-create-update-5rhgz"] Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.427476 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novacell155b0-account-delete-fmhhg"] Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.434887 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-55b0-account-create-update-5rhgz"] Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.471503 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.492232 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d51f611-5529-49e6-abf6-abb13295f7ee" path="/var/lib/kubelet/pods/2d51f611-5529-49e6-abf6-abb13295f7ee/volumes" Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.493251 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3384e88d-d777-49d8-99ad-beef1cd493e3" path="/var/lib/kubelet/pods/3384e88d-d777-49d8-99ad-beef1cd493e3/volumes" Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.494191 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4537cb47-5182-4574-bbe4-9edd9509e0f1" path="/var/lib/kubelet/pods/4537cb47-5182-4574-bbe4-9edd9509e0f1/volumes" Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.495999 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="768f76f5-e1e2-4d1e-b9a0-bd6884fd9d8f" path="/var/lib/kubelet/pods/768f76f5-e1e2-4d1e-b9a0-bd6884fd9d8f/volumes" Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.496742 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78658d70-0256-4b13-9804-88758f0e33e5" path="/var/lib/kubelet/pods/78658d70-0256-4b13-9804-88758f0e33e5/volumes" Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.497311 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc56dbcf-69ac-4625-a5ee-1bc5ca4b42b6" path="/var/lib/kubelet/pods/cc56dbcf-69ac-4625-a5ee-1bc5ca4b42b6/volumes" Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.497897 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7586855-a8f2-4069-8104-78da30bffbb3" path="/var/lib/kubelet/pods/d7586855-a8f2-4069-8104-78da30bffbb3/volumes" Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.500511 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff214ed6-67b7-4cc7-a5e2-145f27edb08e" 
path="/var/lib/kubelet/pods/ff214ed6-67b7-4cc7-a5e2-145f27edb08e/volumes" Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.501536 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-lc8bp"] Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.506075 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.506083 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"701d1213-1604-4347-989c-98f88c361f06","Type":"ContainerDied","Data":"ee83a809e742eaa5af3d2bfa6d48bd7f885e5501b52e20e3c75a370945ff704d"} Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.506178 4786 scope.go:117] "RemoveContainer" containerID="1788d372258353cb8d6de86fa631004f04f7e95eda7ee4fd5b7ecf8bb3bb6ef3" Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.511140 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-lc8bp"] Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.512055 4786 generic.go:334] "Generic (PLEG): container finished" podID="71ce96de-64df-4cfa-a871-380544ed87d2" containerID="e02f04eb115ae0b715b6adc44ce77d01655d3d4f256a219b968212af9aa219e3" exitCode=0 Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.512100 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"71ce96de-64df-4cfa-a871-380544ed87d2","Type":"ContainerDied","Data":"e02f04eb115ae0b715b6adc44ce77d01655d3d4f256a219b968212af9aa219e3"} Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.512130 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"71ce96de-64df-4cfa-a871-380544ed87d2","Type":"ContainerDied","Data":"e274845ff34680e00bc9e51649b063fdb7425ea6d6f679ee935fe436bc786784"} Jan 27 13:34:09 crc 
kubenswrapper[4786]: I0127 13:34:09.512187 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.524753 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-0ba4-account-create-update-6dvg4"] Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.530568 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novacell00ba4-account-delete-n49zr"] Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.531015 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rlfq\" (UniqueName: \"kubernetes.io/projected/71ce96de-64df-4cfa-a871-380544ed87d2-kube-api-access-7rlfq\") pod \"71ce96de-64df-4cfa-a871-380544ed87d2\" (UID: \"71ce96de-64df-4cfa-a871-380544ed87d2\") " Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.531058 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71ce96de-64df-4cfa-a871-380544ed87d2-config-data\") pod \"71ce96de-64df-4cfa-a871-380544ed87d2\" (UID: \"71ce96de-64df-4cfa-a871-380544ed87d2\") " Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.535114 4786 scope.go:117] "RemoveContainer" containerID="c71820b87588674a4174fdfffbf504e50348107b3351d25b5ae13306ac5ea846" Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.541695 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-0ba4-account-create-update-6dvg4"] Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.548783 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71ce96de-64df-4cfa-a871-380544ed87d2-kube-api-access-7rlfq" (OuterVolumeSpecName: "kube-api-access-7rlfq") pod "71ce96de-64df-4cfa-a871-380544ed87d2" (UID: "71ce96de-64df-4cfa-a871-380544ed87d2"). 
InnerVolumeSpecName "kube-api-access-7rlfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.549012 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novacell00ba4-account-delete-n49zr"] Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.555448 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.560493 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.561801 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71ce96de-64df-4cfa-a871-380544ed87d2-config-data" (OuterVolumeSpecName: "config-data") pod "71ce96de-64df-4cfa-a871-380544ed87d2" (UID: "71ce96de-64df-4cfa-a871-380544ed87d2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.600511 4786 scope.go:117] "RemoveContainer" containerID="e02f04eb115ae0b715b6adc44ce77d01655d3d4f256a219b968212af9aa219e3" Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.619469 4786 scope.go:117] "RemoveContainer" containerID="e02f04eb115ae0b715b6adc44ce77d01655d3d4f256a219b968212af9aa219e3" Jan 27 13:34:09 crc kubenswrapper[4786]: E0127 13:34:09.619937 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e02f04eb115ae0b715b6adc44ce77d01655d3d4f256a219b968212af9aa219e3\": container with ID starting with e02f04eb115ae0b715b6adc44ce77d01655d3d4f256a219b968212af9aa219e3 not found: ID does not exist" containerID="e02f04eb115ae0b715b6adc44ce77d01655d3d4f256a219b968212af9aa219e3" Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.619999 4786 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e02f04eb115ae0b715b6adc44ce77d01655d3d4f256a219b968212af9aa219e3"} err="failed to get container status \"e02f04eb115ae0b715b6adc44ce77d01655d3d4f256a219b968212af9aa219e3\": rpc error: code = NotFound desc = could not find container \"e02f04eb115ae0b715b6adc44ce77d01655d3d4f256a219b968212af9aa219e3\": container with ID starting with e02f04eb115ae0b715b6adc44ce77d01655d3d4f256a219b968212af9aa219e3 not found: ID does not exist" Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.634819 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rlfq\" (UniqueName: \"kubernetes.io/projected/71ce96de-64df-4cfa-a871-380544ed87d2-kube-api-access-7rlfq\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.634871 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71ce96de-64df-4cfa-a871-380544ed87d2-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.848900 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:34:09 crc kubenswrapper[4786]: I0127 13:34:09.862014 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.474099 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="701d1213-1604-4347-989c-98f88c361f06" path="/var/lib/kubelet/pods/701d1213-1604-4347-989c-98f88c361f06/volumes" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.475008 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71ce96de-64df-4cfa-a871-380544ed87d2" path="/var/lib/kubelet/pods/71ce96de-64df-4cfa-a871-380544ed87d2/volumes" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.475551 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="95cacbbe-65da-4223-a8e7-79f565741d0b" path="/var/lib/kubelet/pods/95cacbbe-65da-4223-a8e7-79f565741d0b/volumes" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.476534 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c026f19d-c330-4429-9886-c0ab82c46ae3" path="/var/lib/kubelet/pods/c026f19d-c330-4429-9886-c0ab82c46ae3/volumes" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.477056 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd5ae447-7421-4cae-9768-29425499d2ec" path="/var/lib/kubelet/pods/fd5ae447-7421-4cae-9768-29425499d2ec/volumes" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.629823 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-db-create-q5zbn"] Jan 27 13:34:11 crc kubenswrapper[4786]: E0127 13:34:11.630202 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff214ed6-67b7-4cc7-a5e2-145f27edb08e" containerName="mariadb-account-delete" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.630222 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff214ed6-67b7-4cc7-a5e2-145f27edb08e" containerName="mariadb-account-delete" Jan 27 13:34:11 crc kubenswrapper[4786]: E0127 13:34:11.630244 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7586855-a8f2-4069-8104-78da30bffbb3" containerName="mariadb-account-delete" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.630252 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7586855-a8f2-4069-8104-78da30bffbb3" containerName="mariadb-account-delete" Jan 27 13:34:11 crc kubenswrapper[4786]: E0127 13:34:11.630264 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="701d1213-1604-4347-989c-98f88c361f06" containerName="nova-kuttl-api-api" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.630272 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="701d1213-1604-4347-989c-98f88c361f06" 
containerName="nova-kuttl-api-api" Jan 27 13:34:11 crc kubenswrapper[4786]: E0127 13:34:11.630281 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78658d70-0256-4b13-9804-88758f0e33e5" containerName="nova-kuttl-cell1-conductor-conductor" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.630288 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="78658d70-0256-4b13-9804-88758f0e33e5" containerName="nova-kuttl-cell1-conductor-conductor" Jan 27 13:34:11 crc kubenswrapper[4786]: E0127 13:34:11.630301 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02cab41f-e7f7-411d-a37e-4406f5c7bdc0" containerName="nova-kuttl-cell0-conductor-conductor" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.630309 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="02cab41f-e7f7-411d-a37e-4406f5c7bdc0" containerName="nova-kuttl-cell0-conductor-conductor" Jan 27 13:34:11 crc kubenswrapper[4786]: E0127 13:34:11.630323 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4537cb47-5182-4574-bbe4-9edd9509e0f1" containerName="nova-kuttl-metadata-metadata" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.630331 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="4537cb47-5182-4574-bbe4-9edd9509e0f1" containerName="nova-kuttl-metadata-metadata" Jan 27 13:34:11 crc kubenswrapper[4786]: E0127 13:34:11.630341 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f166195-a387-40f9-b587-3e8e5c3afcf9" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.630348 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f166195-a387-40f9-b587-3e8e5c3afcf9" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 27 13:34:11 crc kubenswrapper[4786]: E0127 13:34:11.630364 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd5ae447-7421-4cae-9768-29425499d2ec" containerName="mariadb-account-delete" Jan 27 
13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.630371 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd5ae447-7421-4cae-9768-29425499d2ec" containerName="mariadb-account-delete" Jan 27 13:34:11 crc kubenswrapper[4786]: E0127 13:34:11.630379 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4537cb47-5182-4574-bbe4-9edd9509e0f1" containerName="nova-kuttl-metadata-log" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.630386 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="4537cb47-5182-4574-bbe4-9edd9509e0f1" containerName="nova-kuttl-metadata-log" Jan 27 13:34:11 crc kubenswrapper[4786]: E0127 13:34:11.630397 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71ce96de-64df-4cfa-a871-380544ed87d2" containerName="nova-kuttl-scheduler-scheduler" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.630404 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="71ce96de-64df-4cfa-a871-380544ed87d2" containerName="nova-kuttl-scheduler-scheduler" Jan 27 13:34:11 crc kubenswrapper[4786]: E0127 13:34:11.630415 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="701d1213-1604-4347-989c-98f88c361f06" containerName="nova-kuttl-api-log" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.630422 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="701d1213-1604-4347-989c-98f88c361f06" containerName="nova-kuttl-api-log" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.630598 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="4537cb47-5182-4574-bbe4-9edd9509e0f1" containerName="nova-kuttl-metadata-metadata" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.636776 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="71ce96de-64df-4cfa-a871-380544ed87d2" containerName="nova-kuttl-scheduler-scheduler" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.636823 4786 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="701d1213-1604-4347-989c-98f88c361f06" containerName="nova-kuttl-api-log" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.636856 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="701d1213-1604-4347-989c-98f88c361f06" containerName="nova-kuttl-api-api" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.636873 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="4537cb47-5182-4574-bbe4-9edd9509e0f1" containerName="nova-kuttl-metadata-log" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.636913 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="78658d70-0256-4b13-9804-88758f0e33e5" containerName="nova-kuttl-cell1-conductor-conductor" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.636925 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7586855-a8f2-4069-8104-78da30bffbb3" containerName="mariadb-account-delete" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.636964 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f166195-a387-40f9-b587-3e8e5c3afcf9" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.636980 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="02cab41f-e7f7-411d-a37e-4406f5c7bdc0" containerName="nova-kuttl-cell0-conductor-conductor" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.636995 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd5ae447-7421-4cae-9768-29425499d2ec" containerName="mariadb-account-delete" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.637005 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff214ed6-67b7-4cc7-a5e2-145f27edb08e" containerName="mariadb-account-delete" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.642408 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-q5zbn" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.656885 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-q5zbn"] Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.664008 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7266df78-7efe-4eaa-b6d0-303fde74806c-operator-scripts\") pod \"nova-api-db-create-q5zbn\" (UID: \"7266df78-7efe-4eaa-b6d0-303fde74806c\") " pod="nova-kuttl-default/nova-api-db-create-q5zbn" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.664099 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spjlg\" (UniqueName: \"kubernetes.io/projected/7266df78-7efe-4eaa-b6d0-303fde74806c-kube-api-access-spjlg\") pod \"nova-api-db-create-q5zbn\" (UID: \"7266df78-7efe-4eaa-b6d0-303fde74806c\") " pod="nova-kuttl-default/nova-api-db-create-q5zbn" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.729488 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-75s89"] Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.730865 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-75s89" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.758646 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-75s89"] Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.765889 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a42ba7b0-9548-4d04-93ac-4e55c86dfa07-operator-scripts\") pod \"nova-cell0-db-create-75s89\" (UID: \"a42ba7b0-9548-4d04-93ac-4e55c86dfa07\") " pod="nova-kuttl-default/nova-cell0-db-create-75s89" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.766446 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt5n6\" (UniqueName: \"kubernetes.io/projected/a42ba7b0-9548-4d04-93ac-4e55c86dfa07-kube-api-access-jt5n6\") pod \"nova-cell0-db-create-75s89\" (UID: \"a42ba7b0-9548-4d04-93ac-4e55c86dfa07\") " pod="nova-kuttl-default/nova-cell0-db-create-75s89" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.766673 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7266df78-7efe-4eaa-b6d0-303fde74806c-operator-scripts\") pod \"nova-api-db-create-q5zbn\" (UID: \"7266df78-7efe-4eaa-b6d0-303fde74806c\") " pod="nova-kuttl-default/nova-api-db-create-q5zbn" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.766991 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spjlg\" (UniqueName: \"kubernetes.io/projected/7266df78-7efe-4eaa-b6d0-303fde74806c-kube-api-access-spjlg\") pod \"nova-api-db-create-q5zbn\" (UID: \"7266df78-7efe-4eaa-b6d0-303fde74806c\") " pod="nova-kuttl-default/nova-api-db-create-q5zbn" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.768831 4786 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7266df78-7efe-4eaa-b6d0-303fde74806c-operator-scripts\") pod \"nova-api-db-create-q5zbn\" (UID: \"7266df78-7efe-4eaa-b6d0-303fde74806c\") " pod="nova-kuttl-default/nova-api-db-create-q5zbn" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.793420 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spjlg\" (UniqueName: \"kubernetes.io/projected/7266df78-7efe-4eaa-b6d0-303fde74806c-kube-api-access-spjlg\") pod \"nova-api-db-create-q5zbn\" (UID: \"7266df78-7efe-4eaa-b6d0-303fde74806c\") " pod="nova-kuttl-default/nova-api-db-create-q5zbn" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.843693 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-d9c7-account-create-update-jc66v"] Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.845780 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-d9c7-account-create-update-jc66v" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.848621 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-api-db-secret" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.871109 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ad89564-d8c3-47fe-a44d-5c26725192aa-operator-scripts\") pod \"nova-api-d9c7-account-create-update-jc66v\" (UID: \"4ad89564-d8c3-47fe-a44d-5c26725192aa\") " pod="nova-kuttl-default/nova-api-d9c7-account-create-update-jc66v" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.871360 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgv9g\" (UniqueName: \"kubernetes.io/projected/4ad89564-d8c3-47fe-a44d-5c26725192aa-kube-api-access-qgv9g\") pod 
\"nova-api-d9c7-account-create-update-jc66v\" (UID: \"4ad89564-d8c3-47fe-a44d-5c26725192aa\") " pod="nova-kuttl-default/nova-api-d9c7-account-create-update-jc66v" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.871449 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a42ba7b0-9548-4d04-93ac-4e55c86dfa07-operator-scripts\") pod \"nova-cell0-db-create-75s89\" (UID: \"a42ba7b0-9548-4d04-93ac-4e55c86dfa07\") " pod="nova-kuttl-default/nova-cell0-db-create-75s89" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.871536 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt5n6\" (UniqueName: \"kubernetes.io/projected/a42ba7b0-9548-4d04-93ac-4e55c86dfa07-kube-api-access-jt5n6\") pod \"nova-cell0-db-create-75s89\" (UID: \"a42ba7b0-9548-4d04-93ac-4e55c86dfa07\") " pod="nova-kuttl-default/nova-cell0-db-create-75s89" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.872567 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a42ba7b0-9548-4d04-93ac-4e55c86dfa07-operator-scripts\") pod \"nova-cell0-db-create-75s89\" (UID: \"a42ba7b0-9548-4d04-93ac-4e55c86dfa07\") " pod="nova-kuttl-default/nova-cell0-db-create-75s89" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.877207 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-d9c7-account-create-update-jc66v"] Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.891868 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt5n6\" (UniqueName: \"kubernetes.io/projected/a42ba7b0-9548-4d04-93ac-4e55c86dfa07-kube-api-access-jt5n6\") pod \"nova-cell0-db-create-75s89\" (UID: \"a42ba7b0-9548-4d04-93ac-4e55c86dfa07\") " pod="nova-kuttl-default/nova-cell0-db-create-75s89" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 
13:34:11.947227 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-fx5r8"] Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.949264 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-fx5r8" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.961237 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-fx5r8"] Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.964807 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-q5zbn" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.973832 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ad89564-d8c3-47fe-a44d-5c26725192aa-operator-scripts\") pod \"nova-api-d9c7-account-create-update-jc66v\" (UID: \"4ad89564-d8c3-47fe-a44d-5c26725192aa\") " pod="nova-kuttl-default/nova-api-d9c7-account-create-update-jc66v" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.976074 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27w8g\" (UniqueName: \"kubernetes.io/projected/f3df647e-a748-4d18-933a-daaf2bfad557-kube-api-access-27w8g\") pod \"nova-cell1-db-create-fx5r8\" (UID: \"f3df647e-a748-4d18-933a-daaf2bfad557\") " pod="nova-kuttl-default/nova-cell1-db-create-fx5r8" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.976348 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgv9g\" (UniqueName: \"kubernetes.io/projected/4ad89564-d8c3-47fe-a44d-5c26725192aa-kube-api-access-qgv9g\") pod \"nova-api-d9c7-account-create-update-jc66v\" (UID: \"4ad89564-d8c3-47fe-a44d-5c26725192aa\") " pod="nova-kuttl-default/nova-api-d9c7-account-create-update-jc66v" Jan 27 13:34:11 crc 
kubenswrapper[4786]: I0127 13:34:11.976583 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3df647e-a748-4d18-933a-daaf2bfad557-operator-scripts\") pod \"nova-cell1-db-create-fx5r8\" (UID: \"f3df647e-a748-4d18-933a-daaf2bfad557\") " pod="nova-kuttl-default/nova-cell1-db-create-fx5r8" Jan 27 13:34:11 crc kubenswrapper[4786]: I0127 13:34:11.975770 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ad89564-d8c3-47fe-a44d-5c26725192aa-operator-scripts\") pod \"nova-api-d9c7-account-create-update-jc66v\" (UID: \"4ad89564-d8c3-47fe-a44d-5c26725192aa\") " pod="nova-kuttl-default/nova-api-d9c7-account-create-update-jc66v" Jan 27 13:34:12 crc kubenswrapper[4786]: I0127 13:34:12.007595 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgv9g\" (UniqueName: \"kubernetes.io/projected/4ad89564-d8c3-47fe-a44d-5c26725192aa-kube-api-access-qgv9g\") pod \"nova-api-d9c7-account-create-update-jc66v\" (UID: \"4ad89564-d8c3-47fe-a44d-5c26725192aa\") " pod="nova-kuttl-default/nova-api-d9c7-account-create-update-jc66v" Jan 27 13:34:12 crc kubenswrapper[4786]: I0127 13:34:12.048145 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-32e8-account-create-update-zxns4"] Jan 27 13:34:12 crc kubenswrapper[4786]: I0127 13:34:12.049169 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-32e8-account-create-update-zxns4" Jan 27 13:34:12 crc kubenswrapper[4786]: I0127 13:34:12.052667 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell0-db-secret" Jan 27 13:34:12 crc kubenswrapper[4786]: I0127 13:34:12.052850 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-75s89" Jan 27 13:34:12 crc kubenswrapper[4786]: I0127 13:34:12.060251 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-32e8-account-create-update-zxns4"] Jan 27 13:34:12 crc kubenswrapper[4786]: I0127 13:34:12.078336 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3df647e-a748-4d18-933a-daaf2bfad557-operator-scripts\") pod \"nova-cell1-db-create-fx5r8\" (UID: \"f3df647e-a748-4d18-933a-daaf2bfad557\") " pod="nova-kuttl-default/nova-cell1-db-create-fx5r8" Jan 27 13:34:12 crc kubenswrapper[4786]: I0127 13:34:12.078440 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pds8s\" (UniqueName: \"kubernetes.io/projected/22a9a4ad-6d8d-48f7-9b01-625b9a2f289e-kube-api-access-pds8s\") pod \"nova-cell0-32e8-account-create-update-zxns4\" (UID: \"22a9a4ad-6d8d-48f7-9b01-625b9a2f289e\") " pod="nova-kuttl-default/nova-cell0-32e8-account-create-update-zxns4" Jan 27 13:34:12 crc kubenswrapper[4786]: I0127 13:34:12.078487 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22a9a4ad-6d8d-48f7-9b01-625b9a2f289e-operator-scripts\") pod \"nova-cell0-32e8-account-create-update-zxns4\" (UID: \"22a9a4ad-6d8d-48f7-9b01-625b9a2f289e\") " pod="nova-kuttl-default/nova-cell0-32e8-account-create-update-zxns4" Jan 27 13:34:12 crc kubenswrapper[4786]: I0127 13:34:12.078509 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27w8g\" (UniqueName: \"kubernetes.io/projected/f3df647e-a748-4d18-933a-daaf2bfad557-kube-api-access-27w8g\") pod \"nova-cell1-db-create-fx5r8\" (UID: \"f3df647e-a748-4d18-933a-daaf2bfad557\") " pod="nova-kuttl-default/nova-cell1-db-create-fx5r8" Jan 27 13:34:12 
crc kubenswrapper[4786]: I0127 13:34:12.079437 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3df647e-a748-4d18-933a-daaf2bfad557-operator-scripts\") pod \"nova-cell1-db-create-fx5r8\" (UID: \"f3df647e-a748-4d18-933a-daaf2bfad557\") " pod="nova-kuttl-default/nova-cell1-db-create-fx5r8" Jan 27 13:34:12 crc kubenswrapper[4786]: I0127 13:34:12.104553 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27w8g\" (UniqueName: \"kubernetes.io/projected/f3df647e-a748-4d18-933a-daaf2bfad557-kube-api-access-27w8g\") pod \"nova-cell1-db-create-fx5r8\" (UID: \"f3df647e-a748-4d18-933a-daaf2bfad557\") " pod="nova-kuttl-default/nova-cell1-db-create-fx5r8" Jan 27 13:34:12 crc kubenswrapper[4786]: I0127 13:34:12.171089 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-d9c7-account-create-update-jc66v" Jan 27 13:34:12 crc kubenswrapper[4786]: I0127 13:34:12.179701 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pds8s\" (UniqueName: \"kubernetes.io/projected/22a9a4ad-6d8d-48f7-9b01-625b9a2f289e-kube-api-access-pds8s\") pod \"nova-cell0-32e8-account-create-update-zxns4\" (UID: \"22a9a4ad-6d8d-48f7-9b01-625b9a2f289e\") " pod="nova-kuttl-default/nova-cell0-32e8-account-create-update-zxns4" Jan 27 13:34:12 crc kubenswrapper[4786]: I0127 13:34:12.179761 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22a9a4ad-6d8d-48f7-9b01-625b9a2f289e-operator-scripts\") pod \"nova-cell0-32e8-account-create-update-zxns4\" (UID: \"22a9a4ad-6d8d-48f7-9b01-625b9a2f289e\") " pod="nova-kuttl-default/nova-cell0-32e8-account-create-update-zxns4" Jan 27 13:34:12 crc kubenswrapper[4786]: I0127 13:34:12.180856 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22a9a4ad-6d8d-48f7-9b01-625b9a2f289e-operator-scripts\") pod \"nova-cell0-32e8-account-create-update-zxns4\" (UID: \"22a9a4ad-6d8d-48f7-9b01-625b9a2f289e\") " pod="nova-kuttl-default/nova-cell0-32e8-account-create-update-zxns4" Jan 27 13:34:12 crc kubenswrapper[4786]: I0127 13:34:12.203816 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pds8s\" (UniqueName: \"kubernetes.io/projected/22a9a4ad-6d8d-48f7-9b01-625b9a2f289e-kube-api-access-pds8s\") pod \"nova-cell0-32e8-account-create-update-zxns4\" (UID: \"22a9a4ad-6d8d-48f7-9b01-625b9a2f289e\") " pod="nova-kuttl-default/nova-cell0-32e8-account-create-update-zxns4" Jan 27 13:34:12 crc kubenswrapper[4786]: I0127 13:34:12.245247 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-3f48-account-create-update-jhxjh"] Jan 27 13:34:12 crc kubenswrapper[4786]: I0127 13:34:12.246637 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-3f48-account-create-update-jhxjh" Jan 27 13:34:12 crc kubenswrapper[4786]: I0127 13:34:12.251271 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell1-db-secret" Jan 27 13:34:12 crc kubenswrapper[4786]: I0127 13:34:12.267122 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-3f48-account-create-update-jhxjh"] Jan 27 13:34:12 crc kubenswrapper[4786]: I0127 13:34:12.270592 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-fx5r8" Jan 27 13:34:12 crc kubenswrapper[4786]: I0127 13:34:12.280429 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv2jj\" (UniqueName: \"kubernetes.io/projected/12763f09-4dc2-4040-a41d-69da50af4ad8-kube-api-access-qv2jj\") pod \"nova-cell1-3f48-account-create-update-jhxjh\" (UID: \"12763f09-4dc2-4040-a41d-69da50af4ad8\") " pod="nova-kuttl-default/nova-cell1-3f48-account-create-update-jhxjh" Jan 27 13:34:12 crc kubenswrapper[4786]: I0127 13:34:12.280545 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12763f09-4dc2-4040-a41d-69da50af4ad8-operator-scripts\") pod \"nova-cell1-3f48-account-create-update-jhxjh\" (UID: \"12763f09-4dc2-4040-a41d-69da50af4ad8\") " pod="nova-kuttl-default/nova-cell1-3f48-account-create-update-jhxjh" Jan 27 13:34:12 crc kubenswrapper[4786]: I0127 13:34:12.382174 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12763f09-4dc2-4040-a41d-69da50af4ad8-operator-scripts\") pod \"nova-cell1-3f48-account-create-update-jhxjh\" (UID: \"12763f09-4dc2-4040-a41d-69da50af4ad8\") " pod="nova-kuttl-default/nova-cell1-3f48-account-create-update-jhxjh" Jan 27 13:34:12 crc kubenswrapper[4786]: I0127 13:34:12.382239 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv2jj\" (UniqueName: \"kubernetes.io/projected/12763f09-4dc2-4040-a41d-69da50af4ad8-kube-api-access-qv2jj\") pod \"nova-cell1-3f48-account-create-update-jhxjh\" (UID: \"12763f09-4dc2-4040-a41d-69da50af4ad8\") " pod="nova-kuttl-default/nova-cell1-3f48-account-create-update-jhxjh" Jan 27 13:34:12 crc kubenswrapper[4786]: I0127 13:34:12.383563 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12763f09-4dc2-4040-a41d-69da50af4ad8-operator-scripts\") pod \"nova-cell1-3f48-account-create-update-jhxjh\" (UID: \"12763f09-4dc2-4040-a41d-69da50af4ad8\") " pod="nova-kuttl-default/nova-cell1-3f48-account-create-update-jhxjh" Jan 27 13:34:12 crc kubenswrapper[4786]: I0127 13:34:12.386032 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-32e8-account-create-update-zxns4" Jan 27 13:34:12 crc kubenswrapper[4786]: I0127 13:34:12.401598 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv2jj\" (UniqueName: \"kubernetes.io/projected/12763f09-4dc2-4040-a41d-69da50af4ad8-kube-api-access-qv2jj\") pod \"nova-cell1-3f48-account-create-update-jhxjh\" (UID: \"12763f09-4dc2-4040-a41d-69da50af4ad8\") " pod="nova-kuttl-default/nova-cell1-3f48-account-create-update-jhxjh" Jan 27 13:34:12 crc kubenswrapper[4786]: I0127 13:34:12.557229 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-q5zbn"] Jan 27 13:34:12 crc kubenswrapper[4786]: I0127 13:34:12.575863 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-3f48-account-create-update-jhxjh" Jan 27 13:34:12 crc kubenswrapper[4786]: I0127 13:34:12.635387 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-75s89"] Jan 27 13:34:12 crc kubenswrapper[4786]: W0127 13:34:12.653855 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda42ba7b0_9548_4d04_93ac_4e55c86dfa07.slice/crio-74a0b2ab3bf71329e70e025fbb6276d310acbf9de7cec2f26bc010d0f7c30572 WatchSource:0}: Error finding container 74a0b2ab3bf71329e70e025fbb6276d310acbf9de7cec2f26bc010d0f7c30572: Status 404 returned error can't find the container with id 74a0b2ab3bf71329e70e025fbb6276d310acbf9de7cec2f26bc010d0f7c30572 Jan 27 13:34:12 crc kubenswrapper[4786]: I0127 13:34:12.755723 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-d9c7-account-create-update-jc66v"] Jan 27 13:34:12 crc kubenswrapper[4786]: I0127 13:34:12.762981 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-fx5r8"] Jan 27 13:34:12 crc kubenswrapper[4786]: W0127 13:34:12.773348 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3df647e_a748_4d18_933a_daaf2bfad557.slice/crio-8b876a32cc3e9d5eb96465cbe68831db1f49fc9d5098cf7ea3f0bc18481c0ee9 WatchSource:0}: Error finding container 8b876a32cc3e9d5eb96465cbe68831db1f49fc9d5098cf7ea3f0bc18481c0ee9: Status 404 returned error can't find the container with id 8b876a32cc3e9d5eb96465cbe68831db1f49fc9d5098cf7ea3f0bc18481c0ee9 Jan 27 13:34:12 crc kubenswrapper[4786]: W0127 13:34:12.794670 4786 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ad89564_d8c3_47fe_a44d_5c26725192aa.slice/crio-b9dc785ad9145933c796ad85e239de5f34984a7b5ce2946a539d7a4a93fb71f8 WatchSource:0}: Error finding container b9dc785ad9145933c796ad85e239de5f34984a7b5ce2946a539d7a4a93fb71f8: Status 404 returned error can't find the container with id b9dc785ad9145933c796ad85e239de5f34984a7b5ce2946a539d7a4a93fb71f8 Jan 27 13:34:12 crc kubenswrapper[4786]: I0127 13:34:12.914132 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-32e8-account-create-update-zxns4"] Jan 27 13:34:12 crc kubenswrapper[4786]: W0127 13:34:12.924305 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22a9a4ad_6d8d_48f7_9b01_625b9a2f289e.slice/crio-f80efd119a9788d2af2a0b74cfa7250df06d11a91b1945cbbb43ca7838e2f8d1 WatchSource:0}: Error finding container f80efd119a9788d2af2a0b74cfa7250df06d11a91b1945cbbb43ca7838e2f8d1: Status 404 returned error can't find the container with id f80efd119a9788d2af2a0b74cfa7250df06d11a91b1945cbbb43ca7838e2f8d1 Jan 27 13:34:13 crc kubenswrapper[4786]: I0127 13:34:13.073066 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-3f48-account-create-update-jhxjh"] Jan 27 13:34:13 crc kubenswrapper[4786]: I0127 13:34:13.546700 4786 generic.go:334] "Generic (PLEG): container finished" podID="12763f09-4dc2-4040-a41d-69da50af4ad8" containerID="bdd86bc29ccd6afdb3efaac74980f39bf86e6f010822c5ac4fb1df9dc2e6102a" exitCode=0 Jan 27 13:34:13 crc kubenswrapper[4786]: I0127 13:34:13.546964 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-3f48-account-create-update-jhxjh" event={"ID":"12763f09-4dc2-4040-a41d-69da50af4ad8","Type":"ContainerDied","Data":"bdd86bc29ccd6afdb3efaac74980f39bf86e6f010822c5ac4fb1df9dc2e6102a"} Jan 27 13:34:13 crc kubenswrapper[4786]: I0127 13:34:13.547027 4786 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-3f48-account-create-update-jhxjh" event={"ID":"12763f09-4dc2-4040-a41d-69da50af4ad8","Type":"ContainerStarted","Data":"2b5fa1b9f955c9cb2ccfe5b2bb341f58b26aa9ea3f89f0a55f0a41f1c466d321"} Jan 27 13:34:13 crc kubenswrapper[4786]: I0127 13:34:13.548692 4786 generic.go:334] "Generic (PLEG): container finished" podID="a42ba7b0-9548-4d04-93ac-4e55c86dfa07" containerID="20361a142a4f796647a628117b87c11b058d26cba7625f4f0d9c6430641a5e60" exitCode=0 Jan 27 13:34:13 crc kubenswrapper[4786]: I0127 13:34:13.548739 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-75s89" event={"ID":"a42ba7b0-9548-4d04-93ac-4e55c86dfa07","Type":"ContainerDied","Data":"20361a142a4f796647a628117b87c11b058d26cba7625f4f0d9c6430641a5e60"} Jan 27 13:34:13 crc kubenswrapper[4786]: I0127 13:34:13.548902 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-75s89" event={"ID":"a42ba7b0-9548-4d04-93ac-4e55c86dfa07","Type":"ContainerStarted","Data":"74a0b2ab3bf71329e70e025fbb6276d310acbf9de7cec2f26bc010d0f7c30572"} Jan 27 13:34:13 crc kubenswrapper[4786]: I0127 13:34:13.554325 4786 generic.go:334] "Generic (PLEG): container finished" podID="22a9a4ad-6d8d-48f7-9b01-625b9a2f289e" containerID="e1dc671ff55f64b496f63a9fbbc84a50742ef5ca38387bd931e2e8f01f0ce460" exitCode=0 Jan 27 13:34:13 crc kubenswrapper[4786]: I0127 13:34:13.554412 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-32e8-account-create-update-zxns4" event={"ID":"22a9a4ad-6d8d-48f7-9b01-625b9a2f289e","Type":"ContainerDied","Data":"e1dc671ff55f64b496f63a9fbbc84a50742ef5ca38387bd931e2e8f01f0ce460"} Jan 27 13:34:13 crc kubenswrapper[4786]: I0127 13:34:13.554450 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-32e8-account-create-update-zxns4" 
event={"ID":"22a9a4ad-6d8d-48f7-9b01-625b9a2f289e","Type":"ContainerStarted","Data":"f80efd119a9788d2af2a0b74cfa7250df06d11a91b1945cbbb43ca7838e2f8d1"} Jan 27 13:34:13 crc kubenswrapper[4786]: I0127 13:34:13.556388 4786 generic.go:334] "Generic (PLEG): container finished" podID="7266df78-7efe-4eaa-b6d0-303fde74806c" containerID="63355997a7f7dbb77d495c3c9d68ec163a663716f067441326a30fea0b9b1c5b" exitCode=0 Jan 27 13:34:13 crc kubenswrapper[4786]: I0127 13:34:13.556511 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-q5zbn" event={"ID":"7266df78-7efe-4eaa-b6d0-303fde74806c","Type":"ContainerDied","Data":"63355997a7f7dbb77d495c3c9d68ec163a663716f067441326a30fea0b9b1c5b"} Jan 27 13:34:13 crc kubenswrapper[4786]: I0127 13:34:13.556736 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-q5zbn" event={"ID":"7266df78-7efe-4eaa-b6d0-303fde74806c","Type":"ContainerStarted","Data":"0dd4d0459b6b5e2099456b9b4bdc7e4e8d7b18966dea4a23556c68bc44393baf"} Jan 27 13:34:13 crc kubenswrapper[4786]: I0127 13:34:13.561116 4786 generic.go:334] "Generic (PLEG): container finished" podID="4ad89564-d8c3-47fe-a44d-5c26725192aa" containerID="1f73b1e2c2c8dc08b8a0e64743aa5b7bb37d369a106b64ac7dba2966f385f0ba" exitCode=0 Jan 27 13:34:13 crc kubenswrapper[4786]: I0127 13:34:13.561207 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-d9c7-account-create-update-jc66v" event={"ID":"4ad89564-d8c3-47fe-a44d-5c26725192aa","Type":"ContainerDied","Data":"1f73b1e2c2c8dc08b8a0e64743aa5b7bb37d369a106b64ac7dba2966f385f0ba"} Jan 27 13:34:13 crc kubenswrapper[4786]: I0127 13:34:13.561229 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-d9c7-account-create-update-jc66v" event={"ID":"4ad89564-d8c3-47fe-a44d-5c26725192aa","Type":"ContainerStarted","Data":"b9dc785ad9145933c796ad85e239de5f34984a7b5ce2946a539d7a4a93fb71f8"} Jan 27 13:34:13 crc 
kubenswrapper[4786]: I0127 13:34:13.566624 4786 generic.go:334] "Generic (PLEG): container finished" podID="f3df647e-a748-4d18-933a-daaf2bfad557" containerID="d4bfc1c5e8bba5e7a80de5dbc3a87cc974cc68affe27f4446e153d432c328d18" exitCode=0 Jan 27 13:34:13 crc kubenswrapper[4786]: I0127 13:34:13.566679 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-fx5r8" event={"ID":"f3df647e-a748-4d18-933a-daaf2bfad557","Type":"ContainerDied","Data":"d4bfc1c5e8bba5e7a80de5dbc3a87cc974cc68affe27f4446e153d432c328d18"} Jan 27 13:34:13 crc kubenswrapper[4786]: I0127 13:34:13.566707 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-fx5r8" event={"ID":"f3df647e-a748-4d18-933a-daaf2bfad557","Type":"ContainerStarted","Data":"8b876a32cc3e9d5eb96465cbe68831db1f49fc9d5098cf7ea3f0bc18481c0ee9"} Jan 27 13:34:14 crc kubenswrapper[4786]: I0127 13:34:14.941430 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-75s89" Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.029351 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt5n6\" (UniqueName: \"kubernetes.io/projected/a42ba7b0-9548-4d04-93ac-4e55c86dfa07-kube-api-access-jt5n6\") pod \"a42ba7b0-9548-4d04-93ac-4e55c86dfa07\" (UID: \"a42ba7b0-9548-4d04-93ac-4e55c86dfa07\") " Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.029517 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a42ba7b0-9548-4d04-93ac-4e55c86dfa07-operator-scripts\") pod \"a42ba7b0-9548-4d04-93ac-4e55c86dfa07\" (UID: \"a42ba7b0-9548-4d04-93ac-4e55c86dfa07\") " Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.030728 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/a42ba7b0-9548-4d04-93ac-4e55c86dfa07-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a42ba7b0-9548-4d04-93ac-4e55c86dfa07" (UID: "a42ba7b0-9548-4d04-93ac-4e55c86dfa07"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.036525 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a42ba7b0-9548-4d04-93ac-4e55c86dfa07-kube-api-access-jt5n6" (OuterVolumeSpecName: "kube-api-access-jt5n6") pod "a42ba7b0-9548-4d04-93ac-4e55c86dfa07" (UID: "a42ba7b0-9548-4d04-93ac-4e55c86dfa07"). InnerVolumeSpecName "kube-api-access-jt5n6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.131888 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jt5n6\" (UniqueName: \"kubernetes.io/projected/a42ba7b0-9548-4d04-93ac-4e55c86dfa07-kube-api-access-jt5n6\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.131921 4786 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a42ba7b0-9548-4d04-93ac-4e55c86dfa07-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.211003 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-3f48-account-create-update-jhxjh" Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.217922 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-d9c7-account-create-update-jc66v" Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.231546 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-32e8-account-create-update-zxns4" Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.242479 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-fx5r8" Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.259302 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-q5zbn" Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.333927 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22a9a4ad-6d8d-48f7-9b01-625b9a2f289e-operator-scripts\") pod \"22a9a4ad-6d8d-48f7-9b01-625b9a2f289e\" (UID: \"22a9a4ad-6d8d-48f7-9b01-625b9a2f289e\") " Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.334845 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ad89564-d8c3-47fe-a44d-5c26725192aa-operator-scripts\") pod \"4ad89564-d8c3-47fe-a44d-5c26725192aa\" (UID: \"4ad89564-d8c3-47fe-a44d-5c26725192aa\") " Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.334892 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spjlg\" (UniqueName: \"kubernetes.io/projected/7266df78-7efe-4eaa-b6d0-303fde74806c-kube-api-access-spjlg\") pod \"7266df78-7efe-4eaa-b6d0-303fde74806c\" (UID: \"7266df78-7efe-4eaa-b6d0-303fde74806c\") " Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.334954 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgv9g\" (UniqueName: \"kubernetes.io/projected/4ad89564-d8c3-47fe-a44d-5c26725192aa-kube-api-access-qgv9g\") pod \"4ad89564-d8c3-47fe-a44d-5c26725192aa\" (UID: \"4ad89564-d8c3-47fe-a44d-5c26725192aa\") " Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 
13:34:15.334998 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12763f09-4dc2-4040-a41d-69da50af4ad8-operator-scripts\") pod \"12763f09-4dc2-4040-a41d-69da50af4ad8\" (UID: \"12763f09-4dc2-4040-a41d-69da50af4ad8\") " Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.335032 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qv2jj\" (UniqueName: \"kubernetes.io/projected/12763f09-4dc2-4040-a41d-69da50af4ad8-kube-api-access-qv2jj\") pod \"12763f09-4dc2-4040-a41d-69da50af4ad8\" (UID: \"12763f09-4dc2-4040-a41d-69da50af4ad8\") " Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.335083 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3df647e-a748-4d18-933a-daaf2bfad557-operator-scripts\") pod \"f3df647e-a748-4d18-933a-daaf2bfad557\" (UID: \"f3df647e-a748-4d18-933a-daaf2bfad557\") " Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.335126 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pds8s\" (UniqueName: \"kubernetes.io/projected/22a9a4ad-6d8d-48f7-9b01-625b9a2f289e-kube-api-access-pds8s\") pod \"22a9a4ad-6d8d-48f7-9b01-625b9a2f289e\" (UID: \"22a9a4ad-6d8d-48f7-9b01-625b9a2f289e\") " Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.335178 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7266df78-7efe-4eaa-b6d0-303fde74806c-operator-scripts\") pod \"7266df78-7efe-4eaa-b6d0-303fde74806c\" (UID: \"7266df78-7efe-4eaa-b6d0-303fde74806c\") " Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.335225 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27w8g\" (UniqueName: 
\"kubernetes.io/projected/f3df647e-a748-4d18-933a-daaf2bfad557-kube-api-access-27w8g\") pod \"f3df647e-a748-4d18-933a-daaf2bfad557\" (UID: \"f3df647e-a748-4d18-933a-daaf2bfad557\") " Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.334998 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22a9a4ad-6d8d-48f7-9b01-625b9a2f289e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "22a9a4ad-6d8d-48f7-9b01-625b9a2f289e" (UID: "22a9a4ad-6d8d-48f7-9b01-625b9a2f289e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.336041 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ad89564-d8c3-47fe-a44d-5c26725192aa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4ad89564-d8c3-47fe-a44d-5c26725192aa" (UID: "4ad89564-d8c3-47fe-a44d-5c26725192aa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.336528 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12763f09-4dc2-4040-a41d-69da50af4ad8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "12763f09-4dc2-4040-a41d-69da50af4ad8" (UID: "12763f09-4dc2-4040-a41d-69da50af4ad8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.336989 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7266df78-7efe-4eaa-b6d0-303fde74806c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7266df78-7efe-4eaa-b6d0-303fde74806c" (UID: "7266df78-7efe-4eaa-b6d0-303fde74806c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.337373 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3df647e-a748-4d18-933a-daaf2bfad557-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f3df647e-a748-4d18-933a-daaf2bfad557" (UID: "f3df647e-a748-4d18-933a-daaf2bfad557"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.339575 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22a9a4ad-6d8d-48f7-9b01-625b9a2f289e-kube-api-access-pds8s" (OuterVolumeSpecName: "kube-api-access-pds8s") pod "22a9a4ad-6d8d-48f7-9b01-625b9a2f289e" (UID: "22a9a4ad-6d8d-48f7-9b01-625b9a2f289e"). InnerVolumeSpecName "kube-api-access-pds8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.339658 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12763f09-4dc2-4040-a41d-69da50af4ad8-kube-api-access-qv2jj" (OuterVolumeSpecName: "kube-api-access-qv2jj") pod "12763f09-4dc2-4040-a41d-69da50af4ad8" (UID: "12763f09-4dc2-4040-a41d-69da50af4ad8"). InnerVolumeSpecName "kube-api-access-qv2jj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.339920 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ad89564-d8c3-47fe-a44d-5c26725192aa-kube-api-access-qgv9g" (OuterVolumeSpecName: "kube-api-access-qgv9g") pod "4ad89564-d8c3-47fe-a44d-5c26725192aa" (UID: "4ad89564-d8c3-47fe-a44d-5c26725192aa"). InnerVolumeSpecName "kube-api-access-qgv9g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.340087 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3df647e-a748-4d18-933a-daaf2bfad557-kube-api-access-27w8g" (OuterVolumeSpecName: "kube-api-access-27w8g") pod "f3df647e-a748-4d18-933a-daaf2bfad557" (UID: "f3df647e-a748-4d18-933a-daaf2bfad557"). InnerVolumeSpecName "kube-api-access-27w8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.340593 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7266df78-7efe-4eaa-b6d0-303fde74806c-kube-api-access-spjlg" (OuterVolumeSpecName: "kube-api-access-spjlg") pod "7266df78-7efe-4eaa-b6d0-303fde74806c" (UID: "7266df78-7efe-4eaa-b6d0-303fde74806c"). InnerVolumeSpecName "kube-api-access-spjlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.438184 4786 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ad89564-d8c3-47fe-a44d-5c26725192aa-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.438224 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spjlg\" (UniqueName: \"kubernetes.io/projected/7266df78-7efe-4eaa-b6d0-303fde74806c-kube-api-access-spjlg\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.438235 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgv9g\" (UniqueName: \"kubernetes.io/projected/4ad89564-d8c3-47fe-a44d-5c26725192aa-kube-api-access-qgv9g\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.438244 4786 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/12763f09-4dc2-4040-a41d-69da50af4ad8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.438253 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qv2jj\" (UniqueName: \"kubernetes.io/projected/12763f09-4dc2-4040-a41d-69da50af4ad8-kube-api-access-qv2jj\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.438262 4786 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3df647e-a748-4d18-933a-daaf2bfad557-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.438270 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pds8s\" (UniqueName: \"kubernetes.io/projected/22a9a4ad-6d8d-48f7-9b01-625b9a2f289e-kube-api-access-pds8s\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.438279 4786 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7266df78-7efe-4eaa-b6d0-303fde74806c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.438289 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27w8g\" (UniqueName: \"kubernetes.io/projected/f3df647e-a748-4d18-933a-daaf2bfad557-kube-api-access-27w8g\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.438297 4786 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22a9a4ad-6d8d-48f7-9b01-625b9a2f289e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.593015 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-q5zbn" 
event={"ID":"7266df78-7efe-4eaa-b6d0-303fde74806c","Type":"ContainerDied","Data":"0dd4d0459b6b5e2099456b9b4bdc7e4e8d7b18966dea4a23556c68bc44393baf"} Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.593411 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0dd4d0459b6b5e2099456b9b4bdc7e4e8d7b18966dea4a23556c68bc44393baf" Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.593296 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-q5zbn" Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.595101 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-d9c7-account-create-update-jc66v" Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.595123 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-d9c7-account-create-update-jc66v" event={"ID":"4ad89564-d8c3-47fe-a44d-5c26725192aa","Type":"ContainerDied","Data":"b9dc785ad9145933c796ad85e239de5f34984a7b5ce2946a539d7a4a93fb71f8"} Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.595142 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9dc785ad9145933c796ad85e239de5f34984a7b5ce2946a539d7a4a93fb71f8" Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.596473 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-fx5r8" event={"ID":"f3df647e-a748-4d18-933a-daaf2bfad557","Type":"ContainerDied","Data":"8b876a32cc3e9d5eb96465cbe68831db1f49fc9d5098cf7ea3f0bc18481c0ee9"} Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.596491 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-fx5r8" Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.596499 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b876a32cc3e9d5eb96465cbe68831db1f49fc9d5098cf7ea3f0bc18481c0ee9" Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.597627 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-3f48-account-create-update-jhxjh" event={"ID":"12763f09-4dc2-4040-a41d-69da50af4ad8","Type":"ContainerDied","Data":"2b5fa1b9f955c9cb2ccfe5b2bb341f58b26aa9ea3f89f0a55f0a41f1c466d321"} Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.597653 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b5fa1b9f955c9cb2ccfe5b2bb341f58b26aa9ea3f89f0a55f0a41f1c466d321" Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.597634 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-3f48-account-create-update-jhxjh" Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.599480 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-75s89" Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.599481 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-75s89" event={"ID":"a42ba7b0-9548-4d04-93ac-4e55c86dfa07","Type":"ContainerDied","Data":"74a0b2ab3bf71329e70e025fbb6276d310acbf9de7cec2f26bc010d0f7c30572"} Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.599638 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74a0b2ab3bf71329e70e025fbb6276d310acbf9de7cec2f26bc010d0f7c30572" Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.601187 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-32e8-account-create-update-zxns4" event={"ID":"22a9a4ad-6d8d-48f7-9b01-625b9a2f289e","Type":"ContainerDied","Data":"f80efd119a9788d2af2a0b74cfa7250df06d11a91b1945cbbb43ca7838e2f8d1"} Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.601319 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f80efd119a9788d2af2a0b74cfa7250df06d11a91b1945cbbb43ca7838e2f8d1" Jan 27 13:34:15 crc kubenswrapper[4786]: I0127 13:34:15.601400 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-32e8-account-create-update-zxns4" Jan 27 13:34:17 crc kubenswrapper[4786]: I0127 13:34:17.456020 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jrnz6"] Jan 27 13:34:17 crc kubenswrapper[4786]: E0127 13:34:17.456381 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22a9a4ad-6d8d-48f7-9b01-625b9a2f289e" containerName="mariadb-account-create-update" Jan 27 13:34:17 crc kubenswrapper[4786]: I0127 13:34:17.456393 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="22a9a4ad-6d8d-48f7-9b01-625b9a2f289e" containerName="mariadb-account-create-update" Jan 27 13:34:17 crc kubenswrapper[4786]: E0127 13:34:17.456406 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7266df78-7efe-4eaa-b6d0-303fde74806c" containerName="mariadb-database-create" Jan 27 13:34:17 crc kubenswrapper[4786]: I0127 13:34:17.456412 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="7266df78-7efe-4eaa-b6d0-303fde74806c" containerName="mariadb-database-create" Jan 27 13:34:17 crc kubenswrapper[4786]: E0127 13:34:17.456421 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ad89564-d8c3-47fe-a44d-5c26725192aa" containerName="mariadb-account-create-update" Jan 27 13:34:17 crc kubenswrapper[4786]: I0127 13:34:17.456428 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ad89564-d8c3-47fe-a44d-5c26725192aa" containerName="mariadb-account-create-update" Jan 27 13:34:17 crc kubenswrapper[4786]: E0127 13:34:17.456438 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3df647e-a748-4d18-933a-daaf2bfad557" containerName="mariadb-database-create" Jan 27 13:34:17 crc kubenswrapper[4786]: I0127 13:34:17.456444 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3df647e-a748-4d18-933a-daaf2bfad557" containerName="mariadb-database-create" Jan 27 13:34:17 crc kubenswrapper[4786]: E0127 
13:34:17.456453 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12763f09-4dc2-4040-a41d-69da50af4ad8" containerName="mariadb-account-create-update" Jan 27 13:34:17 crc kubenswrapper[4786]: I0127 13:34:17.456459 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="12763f09-4dc2-4040-a41d-69da50af4ad8" containerName="mariadb-account-create-update" Jan 27 13:34:17 crc kubenswrapper[4786]: E0127 13:34:17.456473 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a42ba7b0-9548-4d04-93ac-4e55c86dfa07" containerName="mariadb-database-create" Jan 27 13:34:17 crc kubenswrapper[4786]: I0127 13:34:17.456479 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="a42ba7b0-9548-4d04-93ac-4e55c86dfa07" containerName="mariadb-database-create" Jan 27 13:34:17 crc kubenswrapper[4786]: I0127 13:34:17.456644 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="12763f09-4dc2-4040-a41d-69da50af4ad8" containerName="mariadb-account-create-update" Jan 27 13:34:17 crc kubenswrapper[4786]: I0127 13:34:17.456657 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3df647e-a748-4d18-933a-daaf2bfad557" containerName="mariadb-database-create" Jan 27 13:34:17 crc kubenswrapper[4786]: I0127 13:34:17.456667 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ad89564-d8c3-47fe-a44d-5c26725192aa" containerName="mariadb-account-create-update" Jan 27 13:34:17 crc kubenswrapper[4786]: I0127 13:34:17.456675 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="22a9a4ad-6d8d-48f7-9b01-625b9a2f289e" containerName="mariadb-account-create-update" Jan 27 13:34:17 crc kubenswrapper[4786]: I0127 13:34:17.456684 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="7266df78-7efe-4eaa-b6d0-303fde74806c" containerName="mariadb-database-create" Jan 27 13:34:17 crc kubenswrapper[4786]: I0127 13:34:17.456694 4786 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a42ba7b0-9548-4d04-93ac-4e55c86dfa07" containerName="mariadb-database-create" Jan 27 13:34:17 crc kubenswrapper[4786]: I0127 13:34:17.457248 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jrnz6" Jan 27 13:34:17 crc kubenswrapper[4786]: I0127 13:34:17.462851 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-nova-kuttl-dockercfg-g2xq9" Jan 27 13:34:17 crc kubenswrapper[4786]: I0127 13:34:17.462853 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-scripts" Jan 27 13:34:17 crc kubenswrapper[4786]: I0127 13:34:17.463065 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 27 13:34:17 crc kubenswrapper[4786]: I0127 13:34:17.481793 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jrnz6"] Jan 27 13:34:17 crc kubenswrapper[4786]: I0127 13:34:17.579496 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbab3f0a-3fee-457b-a387-1394a96d3847-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-jrnz6\" (UID: \"fbab3f0a-3fee-457b-a387-1394a96d3847\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jrnz6" Jan 27 13:34:17 crc kubenswrapper[4786]: I0127 13:34:17.579575 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jxgt\" (UniqueName: \"kubernetes.io/projected/fbab3f0a-3fee-457b-a387-1394a96d3847-kube-api-access-2jxgt\") pod \"nova-kuttl-cell0-conductor-db-sync-jrnz6\" (UID: \"fbab3f0a-3fee-457b-a387-1394a96d3847\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jrnz6" Jan 27 13:34:17 crc kubenswrapper[4786]: I0127 13:34:17.579690 4786 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbab3f0a-3fee-457b-a387-1394a96d3847-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-jrnz6\" (UID: \"fbab3f0a-3fee-457b-a387-1394a96d3847\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jrnz6" Jan 27 13:34:17 crc kubenswrapper[4786]: I0127 13:34:17.681350 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbab3f0a-3fee-457b-a387-1394a96d3847-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-jrnz6\" (UID: \"fbab3f0a-3fee-457b-a387-1394a96d3847\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jrnz6" Jan 27 13:34:17 crc kubenswrapper[4786]: I0127 13:34:17.681412 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jxgt\" (UniqueName: \"kubernetes.io/projected/fbab3f0a-3fee-457b-a387-1394a96d3847-kube-api-access-2jxgt\") pod \"nova-kuttl-cell0-conductor-db-sync-jrnz6\" (UID: \"fbab3f0a-3fee-457b-a387-1394a96d3847\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jrnz6" Jan 27 13:34:17 crc kubenswrapper[4786]: I0127 13:34:17.681477 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbab3f0a-3fee-457b-a387-1394a96d3847-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-jrnz6\" (UID: \"fbab3f0a-3fee-457b-a387-1394a96d3847\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jrnz6" Jan 27 13:34:17 crc kubenswrapper[4786]: I0127 13:34:17.686213 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbab3f0a-3fee-457b-a387-1394a96d3847-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-jrnz6\" (UID: \"fbab3f0a-3fee-457b-a387-1394a96d3847\") " 
pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jrnz6" Jan 27 13:34:17 crc kubenswrapper[4786]: I0127 13:34:17.686335 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbab3f0a-3fee-457b-a387-1394a96d3847-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-jrnz6\" (UID: \"fbab3f0a-3fee-457b-a387-1394a96d3847\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jrnz6" Jan 27 13:34:17 crc kubenswrapper[4786]: I0127 13:34:17.701036 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jxgt\" (UniqueName: \"kubernetes.io/projected/fbab3f0a-3fee-457b-a387-1394a96d3847-kube-api-access-2jxgt\") pod \"nova-kuttl-cell0-conductor-db-sync-jrnz6\" (UID: \"fbab3f0a-3fee-457b-a387-1394a96d3847\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jrnz6" Jan 27 13:34:17 crc kubenswrapper[4786]: I0127 13:34:17.781985 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jrnz6" Jan 27 13:34:18 crc kubenswrapper[4786]: I0127 13:34:18.264423 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jrnz6"] Jan 27 13:34:18 crc kubenswrapper[4786]: I0127 13:34:18.628482 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jrnz6" event={"ID":"fbab3f0a-3fee-457b-a387-1394a96d3847","Type":"ContainerStarted","Data":"86528360438e8106ed778f3c5df213aeebea35ac07fe4c6f3a819e8e5c345ebf"} Jan 27 13:34:18 crc kubenswrapper[4786]: I0127 13:34:18.628557 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jrnz6" event={"ID":"fbab3f0a-3fee-457b-a387-1394a96d3847","Type":"ContainerStarted","Data":"2ed36f8db59f1ee448ab117e4f685299dc954447e7fbb1e76bdb1d9dcb6896cb"} Jan 27 13:34:18 crc kubenswrapper[4786]: I0127 13:34:18.644897 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jrnz6" podStartSLOduration=1.6448821009999999 podStartE2EDuration="1.644882101s" podCreationTimestamp="2026-01-27 13:34:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:34:18.641427537 +0000 UTC m=+1641.852041686" watchObservedRunningTime="2026-01-27 13:34:18.644882101 +0000 UTC m=+1641.855496220" Jan 27 13:34:19 crc kubenswrapper[4786]: I0127 13:34:19.465024 4786 scope.go:117] "RemoveContainer" containerID="2f94fef961bba31453718ee505c65aba4d5e2d46c581dccdf2daaebbe3a39906" Jan 27 13:34:19 crc kubenswrapper[4786]: E0127 13:34:19.465754 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:34:23 crc kubenswrapper[4786]: I0127 13:34:23.675535 4786 generic.go:334] "Generic (PLEG): container finished" podID="fbab3f0a-3fee-457b-a387-1394a96d3847" containerID="86528360438e8106ed778f3c5df213aeebea35ac07fe4c6f3a819e8e5c345ebf" exitCode=0 Jan 27 13:34:23 crc kubenswrapper[4786]: I0127 13:34:23.675689 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jrnz6" event={"ID":"fbab3f0a-3fee-457b-a387-1394a96d3847","Type":"ContainerDied","Data":"86528360438e8106ed778f3c5df213aeebea35ac07fe4c6f3a819e8e5c345ebf"} Jan 27 13:34:24 crc kubenswrapper[4786]: I0127 13:34:24.971231 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jrnz6" Jan 27 13:34:25 crc kubenswrapper[4786]: I0127 13:34:25.103459 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbab3f0a-3fee-457b-a387-1394a96d3847-config-data\") pod \"fbab3f0a-3fee-457b-a387-1394a96d3847\" (UID: \"fbab3f0a-3fee-457b-a387-1394a96d3847\") " Jan 27 13:34:25 crc kubenswrapper[4786]: I0127 13:34:25.104156 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbab3f0a-3fee-457b-a387-1394a96d3847-scripts\") pod \"fbab3f0a-3fee-457b-a387-1394a96d3847\" (UID: \"fbab3f0a-3fee-457b-a387-1394a96d3847\") " Jan 27 13:34:25 crc kubenswrapper[4786]: I0127 13:34:25.104216 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jxgt\" (UniqueName: \"kubernetes.io/projected/fbab3f0a-3fee-457b-a387-1394a96d3847-kube-api-access-2jxgt\") pod 
\"fbab3f0a-3fee-457b-a387-1394a96d3847\" (UID: \"fbab3f0a-3fee-457b-a387-1394a96d3847\") " Jan 27 13:34:25 crc kubenswrapper[4786]: I0127 13:34:25.108696 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbab3f0a-3fee-457b-a387-1394a96d3847-scripts" (OuterVolumeSpecName: "scripts") pod "fbab3f0a-3fee-457b-a387-1394a96d3847" (UID: "fbab3f0a-3fee-457b-a387-1394a96d3847"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:34:25 crc kubenswrapper[4786]: I0127 13:34:25.108869 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbab3f0a-3fee-457b-a387-1394a96d3847-kube-api-access-2jxgt" (OuterVolumeSpecName: "kube-api-access-2jxgt") pod "fbab3f0a-3fee-457b-a387-1394a96d3847" (UID: "fbab3f0a-3fee-457b-a387-1394a96d3847"). InnerVolumeSpecName "kube-api-access-2jxgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:34:25 crc kubenswrapper[4786]: I0127 13:34:25.122838 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbab3f0a-3fee-457b-a387-1394a96d3847-config-data" (OuterVolumeSpecName: "config-data") pod "fbab3f0a-3fee-457b-a387-1394a96d3847" (UID: "fbab3f0a-3fee-457b-a387-1394a96d3847"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:34:25 crc kubenswrapper[4786]: I0127 13:34:25.205908 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbab3f0a-3fee-457b-a387-1394a96d3847-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:25 crc kubenswrapper[4786]: I0127 13:34:25.205952 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbab3f0a-3fee-457b-a387-1394a96d3847-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:25 crc kubenswrapper[4786]: I0127 13:34:25.205963 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jxgt\" (UniqueName: \"kubernetes.io/projected/fbab3f0a-3fee-457b-a387-1394a96d3847-kube-api-access-2jxgt\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:25 crc kubenswrapper[4786]: I0127 13:34:25.697468 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jrnz6" event={"ID":"fbab3f0a-3fee-457b-a387-1394a96d3847","Type":"ContainerDied","Data":"2ed36f8db59f1ee448ab117e4f685299dc954447e7fbb1e76bdb1d9dcb6896cb"} Jan 27 13:34:25 crc kubenswrapper[4786]: I0127 13:34:25.697508 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ed36f8db59f1ee448ab117e4f685299dc954447e7fbb1e76bdb1d9dcb6896cb" Jan 27 13:34:25 crc kubenswrapper[4786]: I0127 13:34:25.697541 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jrnz6" Jan 27 13:34:25 crc kubenswrapper[4786]: I0127 13:34:25.766179 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 27 13:34:25 crc kubenswrapper[4786]: E0127 13:34:25.766540 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbab3f0a-3fee-457b-a387-1394a96d3847" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 27 13:34:25 crc kubenswrapper[4786]: I0127 13:34:25.766554 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbab3f0a-3fee-457b-a387-1394a96d3847" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 27 13:34:25 crc kubenswrapper[4786]: I0127 13:34:25.766772 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbab3f0a-3fee-457b-a387-1394a96d3847" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 27 13:34:25 crc kubenswrapper[4786]: I0127 13:34:25.767412 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:34:25 crc kubenswrapper[4786]: I0127 13:34:25.770643 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-nova-kuttl-dockercfg-g2xq9" Jan 27 13:34:25 crc kubenswrapper[4786]: I0127 13:34:25.778711 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 27 13:34:25 crc kubenswrapper[4786]: I0127 13:34:25.781704 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 27 13:34:25 crc kubenswrapper[4786]: I0127 13:34:25.817841 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/210ccb3b-41d2-4166-8326-0d44f933bd23-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"210ccb3b-41d2-4166-8326-0d44f933bd23\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:34:25 crc kubenswrapper[4786]: I0127 13:34:25.817923 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqzpz\" (UniqueName: \"kubernetes.io/projected/210ccb3b-41d2-4166-8326-0d44f933bd23-kube-api-access-rqzpz\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"210ccb3b-41d2-4166-8326-0d44f933bd23\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:34:25 crc kubenswrapper[4786]: I0127 13:34:25.919903 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/210ccb3b-41d2-4166-8326-0d44f933bd23-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"210ccb3b-41d2-4166-8326-0d44f933bd23\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:34:25 crc kubenswrapper[4786]: I0127 13:34:25.919988 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-rqzpz\" (UniqueName: \"kubernetes.io/projected/210ccb3b-41d2-4166-8326-0d44f933bd23-kube-api-access-rqzpz\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"210ccb3b-41d2-4166-8326-0d44f933bd23\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:34:25 crc kubenswrapper[4786]: I0127 13:34:25.924640 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/210ccb3b-41d2-4166-8326-0d44f933bd23-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"210ccb3b-41d2-4166-8326-0d44f933bd23\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:34:25 crc kubenswrapper[4786]: I0127 13:34:25.938303 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqzpz\" (UniqueName: \"kubernetes.io/projected/210ccb3b-41d2-4166-8326-0d44f933bd23-kube-api-access-rqzpz\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"210ccb3b-41d2-4166-8326-0d44f933bd23\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:34:26 crc kubenswrapper[4786]: I0127 13:34:26.118351 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:34:26 crc kubenswrapper[4786]: I0127 13:34:26.526654 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 27 13:34:26 crc kubenswrapper[4786]: I0127 13:34:26.706309 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"210ccb3b-41d2-4166-8326-0d44f933bd23","Type":"ContainerStarted","Data":"e24809012c4038884894028afc330aa68ac0ab179f0a3ce520cd8708056bb87a"} Jan 27 13:34:26 crc kubenswrapper[4786]: I0127 13:34:26.706685 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:34:26 crc kubenswrapper[4786]: I0127 13:34:26.706698 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"210ccb3b-41d2-4166-8326-0d44f933bd23","Type":"ContainerStarted","Data":"0b405b8c78e5a37556e1bc76a5d208abe7693398e45340c4e33ebbddcbd7b15d"} Jan 27 13:34:26 crc kubenswrapper[4786]: I0127 13:34:26.723597 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podStartSLOduration=1.723582022 podStartE2EDuration="1.723582022s" podCreationTimestamp="2026-01-27 13:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:34:26.721766312 +0000 UTC m=+1649.932380431" watchObservedRunningTime="2026-01-27 13:34:26.723582022 +0000 UTC m=+1649.934196141" Jan 27 13:34:31 crc kubenswrapper[4786]: I0127 13:34:31.142175 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:34:31 crc kubenswrapper[4786]: I0127 13:34:31.586022 4786 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2xxqx"] Jan 27 13:34:31 crc kubenswrapper[4786]: I0127 13:34:31.587264 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2xxqx" Jan 27 13:34:31 crc kubenswrapper[4786]: I0127 13:34:31.597004 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2xxqx"] Jan 27 13:34:31 crc kubenswrapper[4786]: I0127 13:34:31.597457 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-config-data" Jan 27 13:34:31 crc kubenswrapper[4786]: I0127 13:34:31.597684 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-scripts" Jan 27 13:34:31 crc kubenswrapper[4786]: I0127 13:34:31.702870 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e67e53fe-61a8-4a3d-aa25-75530bca5677-scripts\") pod \"nova-kuttl-cell0-cell-mapping-2xxqx\" (UID: \"e67e53fe-61a8-4a3d-aa25-75530bca5677\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2xxqx" Jan 27 13:34:31 crc kubenswrapper[4786]: I0127 13:34:31.702938 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e67e53fe-61a8-4a3d-aa25-75530bca5677-config-data\") pod \"nova-kuttl-cell0-cell-mapping-2xxqx\" (UID: \"e67e53fe-61a8-4a3d-aa25-75530bca5677\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2xxqx" Jan 27 13:34:31 crc kubenswrapper[4786]: I0127 13:34:31.702982 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7hx6\" (UniqueName: \"kubernetes.io/projected/e67e53fe-61a8-4a3d-aa25-75530bca5677-kube-api-access-z7hx6\") pod \"nova-kuttl-cell0-cell-mapping-2xxqx\" (UID: 
\"e67e53fe-61a8-4a3d-aa25-75530bca5677\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2xxqx" Jan 27 13:34:31 crc kubenswrapper[4786]: I0127 13:34:31.729970 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:34:31 crc kubenswrapper[4786]: I0127 13:34:31.731509 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:34:31 crc kubenswrapper[4786]: I0127 13:34:31.734858 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 27 13:34:31 crc kubenswrapper[4786]: I0127 13:34:31.748386 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:34:31 crc kubenswrapper[4786]: I0127 13:34:31.805093 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e67e53fe-61a8-4a3d-aa25-75530bca5677-scripts\") pod \"nova-kuttl-cell0-cell-mapping-2xxqx\" (UID: \"e67e53fe-61a8-4a3d-aa25-75530bca5677\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2xxqx" Jan 27 13:34:31 crc kubenswrapper[4786]: I0127 13:34:31.805226 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e67e53fe-61a8-4a3d-aa25-75530bca5677-config-data\") pod \"nova-kuttl-cell0-cell-mapping-2xxqx\" (UID: \"e67e53fe-61a8-4a3d-aa25-75530bca5677\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2xxqx" Jan 27 13:34:31 crc kubenswrapper[4786]: I0127 13:34:31.806157 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7hx6\" (UniqueName: \"kubernetes.io/projected/e67e53fe-61a8-4a3d-aa25-75530bca5677-kube-api-access-z7hx6\") pod \"nova-kuttl-cell0-cell-mapping-2xxqx\" (UID: \"e67e53fe-61a8-4a3d-aa25-75530bca5677\") " 
pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2xxqx" Jan 27 13:34:31 crc kubenswrapper[4786]: I0127 13:34:31.811409 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:34:31 crc kubenswrapper[4786]: I0127 13:34:31.815204 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e67e53fe-61a8-4a3d-aa25-75530bca5677-scripts\") pod \"nova-kuttl-cell0-cell-mapping-2xxqx\" (UID: \"e67e53fe-61a8-4a3d-aa25-75530bca5677\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2xxqx" Jan 27 13:34:31 crc kubenswrapper[4786]: I0127 13:34:31.815351 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e67e53fe-61a8-4a3d-aa25-75530bca5677-config-data\") pod \"nova-kuttl-cell0-cell-mapping-2xxqx\" (UID: \"e67e53fe-61a8-4a3d-aa25-75530bca5677\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2xxqx" Jan 27 13:34:31 crc kubenswrapper[4786]: I0127 13:34:31.825428 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:34:31 crc kubenswrapper[4786]: I0127 13:34:31.828565 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7hx6\" (UniqueName: \"kubernetes.io/projected/e67e53fe-61a8-4a3d-aa25-75530bca5677-kube-api-access-z7hx6\") pod \"nova-kuttl-cell0-cell-mapping-2xxqx\" (UID: \"e67e53fe-61a8-4a3d-aa25-75530bca5677\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2xxqx" Jan 27 13:34:31 crc kubenswrapper[4786]: I0127 13:34:31.831113 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 27 13:34:31 crc kubenswrapper[4786]: I0127 13:34:31.847993 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:34:31 crc kubenswrapper[4786]: I0127 13:34:31.911019 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cafcf6be-d215-4fb1-b2fa-73e41ef77250-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"cafcf6be-d215-4fb1-b2fa-73e41ef77250\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:34:31 crc kubenswrapper[4786]: I0127 13:34:31.911082 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/989652e5-d96a-4ae1-b95c-ecea25dbae4e-logs\") pod \"nova-kuttl-api-0\" (UID: \"989652e5-d96a-4ae1-b95c-ecea25dbae4e\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:34:31 crc kubenswrapper[4786]: I0127 13:34:31.911111 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmtqh\" (UniqueName: \"kubernetes.io/projected/989652e5-d96a-4ae1-b95c-ecea25dbae4e-kube-api-access-zmtqh\") pod \"nova-kuttl-api-0\" (UID: \"989652e5-d96a-4ae1-b95c-ecea25dbae4e\") " 
pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:34:31 crc kubenswrapper[4786]: I0127 13:34:31.911199 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbzd7\" (UniqueName: \"kubernetes.io/projected/cafcf6be-d215-4fb1-b2fa-73e41ef77250-kube-api-access-fbzd7\") pod \"nova-kuttl-metadata-0\" (UID: \"cafcf6be-d215-4fb1-b2fa-73e41ef77250\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:34:31 crc kubenswrapper[4786]: I0127 13:34:31.911224 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/989652e5-d96a-4ae1-b95c-ecea25dbae4e-config-data\") pod \"nova-kuttl-api-0\" (UID: \"989652e5-d96a-4ae1-b95c-ecea25dbae4e\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:34:31 crc kubenswrapper[4786]: I0127 13:34:31.911277 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cafcf6be-d215-4fb1-b2fa-73e41ef77250-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"cafcf6be-d215-4fb1-b2fa-73e41ef77250\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:34:31 crc kubenswrapper[4786]: I0127 13:34:31.925706 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2xxqx" Jan 27 13:34:31 crc kubenswrapper[4786]: I0127 13:34:31.929259 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:34:31 crc kubenswrapper[4786]: I0127 13:34:31.940169 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:34:31 crc kubenswrapper[4786]: I0127 13:34:31.959331 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 27 13:34:31 crc kubenswrapper[4786]: I0127 13:34:31.985091 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:34:31 crc kubenswrapper[4786]: I0127 13:34:31.993853 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 27 13:34:31 crc kubenswrapper[4786]: I0127 13:34:31.994933 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:34:31 crc kubenswrapper[4786]: I0127 13:34:31.999389 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-novncproxy-config-data" Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.003766 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.014078 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cafcf6be-d215-4fb1-b2fa-73e41ef77250-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"cafcf6be-d215-4fb1-b2fa-73e41ef77250\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.014169 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cafcf6be-d215-4fb1-b2fa-73e41ef77250-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"cafcf6be-d215-4fb1-b2fa-73e41ef77250\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.014204 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hxvv\" (UniqueName: \"kubernetes.io/projected/709420ac-3ac1-400d-9256-65483385afeb-kube-api-access-8hxvv\") pod \"nova-kuttl-scheduler-0\" (UID: \"709420ac-3ac1-400d-9256-65483385afeb\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.014236 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/989652e5-d96a-4ae1-b95c-ecea25dbae4e-logs\") pod \"nova-kuttl-api-0\" (UID: \"989652e5-d96a-4ae1-b95c-ecea25dbae4e\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.014264 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmtqh\" (UniqueName: \"kubernetes.io/projected/989652e5-d96a-4ae1-b95c-ecea25dbae4e-kube-api-access-zmtqh\") pod \"nova-kuttl-api-0\" (UID: \"989652e5-d96a-4ae1-b95c-ecea25dbae4e\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.014291 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/709420ac-3ac1-400d-9256-65483385afeb-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"709420ac-3ac1-400d-9256-65483385afeb\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.014324 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f6fg\" (UniqueName: \"kubernetes.io/projected/9de43c40-ccfa-41ef-ae49-bb23e5a71c45-kube-api-access-9f6fg\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"9de43c40-ccfa-41ef-ae49-bb23e5a71c45\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.014370 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9de43c40-ccfa-41ef-ae49-bb23e5a71c45-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"9de43c40-ccfa-41ef-ae49-bb23e5a71c45\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.014399 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbzd7\" (UniqueName: \"kubernetes.io/projected/cafcf6be-d215-4fb1-b2fa-73e41ef77250-kube-api-access-fbzd7\") pod \"nova-kuttl-metadata-0\" (UID: \"cafcf6be-d215-4fb1-b2fa-73e41ef77250\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.014422 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/989652e5-d96a-4ae1-b95c-ecea25dbae4e-config-data\") pod \"nova-kuttl-api-0\" (UID: \"989652e5-d96a-4ae1-b95c-ecea25dbae4e\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.014778 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cafcf6be-d215-4fb1-b2fa-73e41ef77250-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"cafcf6be-d215-4fb1-b2fa-73e41ef77250\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.015625 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/989652e5-d96a-4ae1-b95c-ecea25dbae4e-logs\") pod \"nova-kuttl-api-0\" (UID: \"989652e5-d96a-4ae1-b95c-ecea25dbae4e\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.020672 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/cafcf6be-d215-4fb1-b2fa-73e41ef77250-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"cafcf6be-d215-4fb1-b2fa-73e41ef77250\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.022285 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/989652e5-d96a-4ae1-b95c-ecea25dbae4e-config-data\") pod \"nova-kuttl-api-0\" (UID: \"989652e5-d96a-4ae1-b95c-ecea25dbae4e\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.040377 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbzd7\" (UniqueName: \"kubernetes.io/projected/cafcf6be-d215-4fb1-b2fa-73e41ef77250-kube-api-access-fbzd7\") pod \"nova-kuttl-metadata-0\" (UID: \"cafcf6be-d215-4fb1-b2fa-73e41ef77250\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.046021 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmtqh\" (UniqueName: \"kubernetes.io/projected/989652e5-d96a-4ae1-b95c-ecea25dbae4e-kube-api-access-zmtqh\") pod \"nova-kuttl-api-0\" (UID: \"989652e5-d96a-4ae1-b95c-ecea25dbae4e\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.047914 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.116343 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hxvv\" (UniqueName: \"kubernetes.io/projected/709420ac-3ac1-400d-9256-65483385afeb-kube-api-access-8hxvv\") pod \"nova-kuttl-scheduler-0\" (UID: \"709420ac-3ac1-400d-9256-65483385afeb\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.119192 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/709420ac-3ac1-400d-9256-65483385afeb-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"709420ac-3ac1-400d-9256-65483385afeb\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.119243 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f6fg\" (UniqueName: \"kubernetes.io/projected/9de43c40-ccfa-41ef-ae49-bb23e5a71c45-kube-api-access-9f6fg\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"9de43c40-ccfa-41ef-ae49-bb23e5a71c45\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.119297 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9de43c40-ccfa-41ef-ae49-bb23e5a71c45-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"9de43c40-ccfa-41ef-ae49-bb23e5a71c45\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.128306 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9de43c40-ccfa-41ef-ae49-bb23e5a71c45-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"9de43c40-ccfa-41ef-ae49-bb23e5a71c45\") " 
pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.130244 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/709420ac-3ac1-400d-9256-65483385afeb-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"709420ac-3ac1-400d-9256-65483385afeb\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.135499 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hxvv\" (UniqueName: \"kubernetes.io/projected/709420ac-3ac1-400d-9256-65483385afeb-kube-api-access-8hxvv\") pod \"nova-kuttl-scheduler-0\" (UID: \"709420ac-3ac1-400d-9256-65483385afeb\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.136950 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f6fg\" (UniqueName: \"kubernetes.io/projected/9de43c40-ccfa-41ef-ae49-bb23e5a71c45-kube-api-access-9f6fg\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"9de43c40-ccfa-41ef-ae49-bb23e5a71c45\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.195694 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.264688 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.363251 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.466295 4786 scope.go:117] "RemoveContainer" containerID="2f94fef961bba31453718ee505c65aba4d5e2d46c581dccdf2daaebbe3a39906" Jan 27 13:34:32 crc kubenswrapper[4786]: E0127 13:34:32.466505 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.479883 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2xxqx"] Jan 27 13:34:32 crc kubenswrapper[4786]: W0127 13:34:32.502006 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode67e53fe_61a8_4a3d_aa25_75530bca5677.slice/crio-af143a514f149b284d59532c7c6b502144570da5779e4929678fbe7565bfb8f7 WatchSource:0}: Error finding container af143a514f149b284d59532c7c6b502144570da5779e4929678fbe7565bfb8f7: Status 404 returned error can't find the container with id af143a514f149b284d59532c7c6b502144570da5779e4929678fbe7565bfb8f7 Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.535136 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-4n2dz"] Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.538828 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-4n2dz" Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.541583 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data" Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.541888 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-scripts" Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.547487 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-4n2dz"] Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.597176 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.637764 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c21dee95-36c0-4f2a-983d-f7b8b8b383a9-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-4n2dz\" (UID: \"c21dee95-36c0-4f2a-983d-f7b8b8b383a9\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-4n2dz" Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.637815 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zb9w\" (UniqueName: \"kubernetes.io/projected/c21dee95-36c0-4f2a-983d-f7b8b8b383a9-kube-api-access-5zb9w\") pod \"nova-kuttl-cell1-conductor-db-sync-4n2dz\" (UID: \"c21dee95-36c0-4f2a-983d-f7b8b8b383a9\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-4n2dz" Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.637912 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c21dee95-36c0-4f2a-983d-f7b8b8b383a9-scripts\") pod 
\"nova-kuttl-cell1-conductor-db-sync-4n2dz\" (UID: \"c21dee95-36c0-4f2a-983d-f7b8b8b383a9\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-4n2dz" Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.698103 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:34:32 crc kubenswrapper[4786]: W0127 13:34:32.700427 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcafcf6be_d215_4fb1_b2fa_73e41ef77250.slice/crio-5721f868b176f98c7ac5f24b30a4fcc68d6f8ff1ce9ea471a3780cfaf4edebb9 WatchSource:0}: Error finding container 5721f868b176f98c7ac5f24b30a4fcc68d6f8ff1ce9ea471a3780cfaf4edebb9: Status 404 returned error can't find the container with id 5721f868b176f98c7ac5f24b30a4fcc68d6f8ff1ce9ea471a3780cfaf4edebb9 Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.739318 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c21dee95-36c0-4f2a-983d-f7b8b8b383a9-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-4n2dz\" (UID: \"c21dee95-36c0-4f2a-983d-f7b8b8b383a9\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-4n2dz" Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.739474 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c21dee95-36c0-4f2a-983d-f7b8b8b383a9-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-4n2dz\" (UID: \"c21dee95-36c0-4f2a-983d-f7b8b8b383a9\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-4n2dz" Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.739511 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zb9w\" (UniqueName: \"kubernetes.io/projected/c21dee95-36c0-4f2a-983d-f7b8b8b383a9-kube-api-access-5zb9w\") pod 
\"nova-kuttl-cell1-conductor-db-sync-4n2dz\" (UID: \"c21dee95-36c0-4f2a-983d-f7b8b8b383a9\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-4n2dz" Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.744194 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c21dee95-36c0-4f2a-983d-f7b8b8b383a9-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-4n2dz\" (UID: \"c21dee95-36c0-4f2a-983d-f7b8b8b383a9\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-4n2dz" Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.744222 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c21dee95-36c0-4f2a-983d-f7b8b8b383a9-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-4n2dz\" (UID: \"c21dee95-36c0-4f2a-983d-f7b8b8b383a9\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-4n2dz" Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.755329 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zb9w\" (UniqueName: \"kubernetes.io/projected/c21dee95-36c0-4f2a-983d-f7b8b8b383a9-kube-api-access-5zb9w\") pod \"nova-kuttl-cell1-conductor-db-sync-4n2dz\" (UID: \"c21dee95-36c0-4f2a-983d-f7b8b8b383a9\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-4n2dz" Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.757455 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"cafcf6be-d215-4fb1-b2fa-73e41ef77250","Type":"ContainerStarted","Data":"5721f868b176f98c7ac5f24b30a4fcc68d6f8ff1ce9ea471a3780cfaf4edebb9"} Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.758935 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" 
event={"ID":"989652e5-d96a-4ae1-b95c-ecea25dbae4e","Type":"ContainerStarted","Data":"9cc8d26421be744bee758dfb059695904d8536ab6ed9af06ff5250457064096a"} Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.760984 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2xxqx" event={"ID":"e67e53fe-61a8-4a3d-aa25-75530bca5677","Type":"ContainerStarted","Data":"af143a514f149b284d59532c7c6b502144570da5779e4929678fbe7565bfb8f7"} Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.787145 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.857639 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-4n2dz" Jan 27 13:34:32 crc kubenswrapper[4786]: I0127 13:34:32.879049 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 27 13:34:32 crc kubenswrapper[4786]: W0127 13:34:32.882806 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod709420ac_3ac1_400d_9256_65483385afeb.slice/crio-906fc5bb4caea5be07c5bb019e2f5551b17a240c2224cd383b0440309cf67cf4 WatchSource:0}: Error finding container 906fc5bb4caea5be07c5bb019e2f5551b17a240c2224cd383b0440309cf67cf4: Status 404 returned error can't find the container with id 906fc5bb4caea5be07c5bb019e2f5551b17a240c2224cd383b0440309cf67cf4 Jan 27 13:34:32 crc kubenswrapper[4786]: W0127 13:34:32.926428 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9de43c40_ccfa_41ef_ae49_bb23e5a71c45.slice/crio-1678d1fc08d9bc95f18153edc23ee2acce9a4feba9da24d25b3ea738dc5591b1 WatchSource:0}: Error finding container 1678d1fc08d9bc95f18153edc23ee2acce9a4feba9da24d25b3ea738dc5591b1: Status 404 
returned error can't find the container with id 1678d1fc08d9bc95f18153edc23ee2acce9a4feba9da24d25b3ea738dc5591b1 Jan 27 13:34:33 crc kubenswrapper[4786]: I0127 13:34:33.405448 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-4n2dz"] Jan 27 13:34:33 crc kubenswrapper[4786]: I0127 13:34:33.769743 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"709420ac-3ac1-400d-9256-65483385afeb","Type":"ContainerStarted","Data":"906fc5bb4caea5be07c5bb019e2f5551b17a240c2224cd383b0440309cf67cf4"} Jan 27 13:34:33 crc kubenswrapper[4786]: I0127 13:34:33.771266 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2xxqx" event={"ID":"e67e53fe-61a8-4a3d-aa25-75530bca5677","Type":"ContainerStarted","Data":"73922993335a58cfa0e71a3a13cf37d6d984409db010bb4141672ea3810b86c9"} Jan 27 13:34:33 crc kubenswrapper[4786]: I0127 13:34:33.772676 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-4n2dz" event={"ID":"c21dee95-36c0-4f2a-983d-f7b8b8b383a9","Type":"ContainerStarted","Data":"0c9f77d845f4977c58cca2e1827ee2a8d5e97f4fcd391a025a91eb2e661f64bd"} Jan 27 13:34:33 crc kubenswrapper[4786]: I0127 13:34:33.773699 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"9de43c40-ccfa-41ef-ae49-bb23e5a71c45","Type":"ContainerStarted","Data":"1678d1fc08d9bc95f18153edc23ee2acce9a4feba9da24d25b3ea738dc5591b1"} Jan 27 13:34:34 crc kubenswrapper[4786]: I0127 13:34:34.783182 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"989652e5-d96a-4ae1-b95c-ecea25dbae4e","Type":"ContainerStarted","Data":"ecef8bf18aadd498025e5a36e1ad6435e794efe39cbbdc30073a2d53d0862373"} Jan 27 13:34:34 crc kubenswrapper[4786]: I0127 13:34:34.784303 4786 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"989652e5-d96a-4ae1-b95c-ecea25dbae4e","Type":"ContainerStarted","Data":"4b96c36d6a72849123bcc80fe16fb9324b0f3f88c937d88665b6ff353187964a"} Jan 27 13:34:34 crc kubenswrapper[4786]: I0127 13:34:34.796876 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"cafcf6be-d215-4fb1-b2fa-73e41ef77250","Type":"ContainerStarted","Data":"7e2f8f8cdf6b3a9f471989cec95101c69e85a025333ea25549a0a3728c3fc9b5"} Jan 27 13:34:34 crc kubenswrapper[4786]: I0127 13:34:34.797095 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"cafcf6be-d215-4fb1-b2fa-73e41ef77250","Type":"ContainerStarted","Data":"6d757a40a0a7477e67d0d804fd2434aaac9cd2261b538324f5bce8aeb32683ab"} Jan 27 13:34:34 crc kubenswrapper[4786]: I0127 13:34:34.798877 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-4n2dz" event={"ID":"c21dee95-36c0-4f2a-983d-f7b8b8b383a9","Type":"ContainerStarted","Data":"ead088a3d806480e2ba8b3a9e8104ee8564a35df411737e783d68946d6a63597"} Jan 27 13:34:34 crc kubenswrapper[4786]: I0127 13:34:34.800328 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"9de43c40-ccfa-41ef-ae49-bb23e5a71c45","Type":"ContainerStarted","Data":"fda910f767d34ceca6fd886b75c30a05806b9bcf8a84b4fc3bbd8530d881cdc2"} Jan 27 13:34:34 crc kubenswrapper[4786]: I0127 13:34:34.804868 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"709420ac-3ac1-400d-9256-65483385afeb","Type":"ContainerStarted","Data":"6680561aabaccdac491f6edd8cb0f304538ad385c67391f99b5d30d0cbf7d148"} Jan 27 13:34:34 crc kubenswrapper[4786]: I0127 13:34:34.807430 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=3.807403694 podStartE2EDuration="3.807403694s" podCreationTimestamp="2026-01-27 13:34:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:34:34.801556533 +0000 UTC m=+1658.012170652" watchObservedRunningTime="2026-01-27 13:34:34.807403694 +0000 UTC m=+1658.018017823" Jan 27 13:34:34 crc kubenswrapper[4786]: I0127 13:34:34.850637 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-4n2dz" podStartSLOduration=2.850598484 podStartE2EDuration="2.850598484s" podCreationTimestamp="2026-01-27 13:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:34:34.839276585 +0000 UTC m=+1658.049890714" watchObservedRunningTime="2026-01-27 13:34:34.850598484 +0000 UTC m=+1658.061212613" Jan 27 13:34:34 crc kubenswrapper[4786]: I0127 13:34:34.896986 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podStartSLOduration=3.896943841 podStartE2EDuration="3.896943841s" podCreationTimestamp="2026-01-27 13:34:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:34:34.868973177 +0000 UTC m=+1658.079587316" watchObservedRunningTime="2026-01-27 13:34:34.896943841 +0000 UTC m=+1658.107557960" Jan 27 13:34:34 crc kubenswrapper[4786]: I0127 13:34:34.903930 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=3.903909132 podStartE2EDuration="3.903909132s" podCreationTimestamp="2026-01-27 13:34:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-01-27 13:34:34.889310593 +0000 UTC m=+1658.099924712" watchObservedRunningTime="2026-01-27 13:34:34.903909132 +0000 UTC m=+1658.114523261" Jan 27 13:34:34 crc kubenswrapper[4786]: I0127 13:34:34.922752 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2xxqx" podStartSLOduration=3.922727196 podStartE2EDuration="3.922727196s" podCreationTimestamp="2026-01-27 13:34:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:34:34.90789122 +0000 UTC m=+1658.118505329" watchObservedRunningTime="2026-01-27 13:34:34.922727196 +0000 UTC m=+1658.133341315" Jan 27 13:34:35 crc kubenswrapper[4786]: I0127 13:34:35.830525 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=4.830507784 podStartE2EDuration="4.830507784s" podCreationTimestamp="2026-01-27 13:34:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:34:35.830045271 +0000 UTC m=+1659.040659390" watchObservedRunningTime="2026-01-27 13:34:35.830507784 +0000 UTC m=+1659.041121893" Jan 27 13:34:37 crc kubenswrapper[4786]: I0127 13:34:37.195914 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:34:37 crc kubenswrapper[4786]: I0127 13:34:37.196655 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:34:37 crc kubenswrapper[4786]: I0127 13:34:37.265144 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:34:37 crc kubenswrapper[4786]: I0127 13:34:37.363998 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:34:37 crc kubenswrapper[4786]: I0127 13:34:37.829893 4786 generic.go:334] "Generic (PLEG): container finished" podID="c21dee95-36c0-4f2a-983d-f7b8b8b383a9" containerID="ead088a3d806480e2ba8b3a9e8104ee8564a35df411737e783d68946d6a63597" exitCode=0 Jan 27 13:34:37 crc kubenswrapper[4786]: I0127 13:34:37.829944 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-4n2dz" event={"ID":"c21dee95-36c0-4f2a-983d-f7b8b8b383a9","Type":"ContainerDied","Data":"ead088a3d806480e2ba8b3a9e8104ee8564a35df411737e783d68946d6a63597"} Jan 27 13:34:39 crc kubenswrapper[4786]: I0127 13:34:39.174046 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-4n2dz" Jan 27 13:34:39 crc kubenswrapper[4786]: I0127 13:34:39.347545 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c21dee95-36c0-4f2a-983d-f7b8b8b383a9-config-data\") pod \"c21dee95-36c0-4f2a-983d-f7b8b8b383a9\" (UID: \"c21dee95-36c0-4f2a-983d-f7b8b8b383a9\") " Jan 27 13:34:39 crc kubenswrapper[4786]: I0127 13:34:39.347627 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c21dee95-36c0-4f2a-983d-f7b8b8b383a9-scripts\") pod \"c21dee95-36c0-4f2a-983d-f7b8b8b383a9\" (UID: \"c21dee95-36c0-4f2a-983d-f7b8b8b383a9\") " Jan 27 13:34:39 crc kubenswrapper[4786]: I0127 13:34:39.347700 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zb9w\" (UniqueName: \"kubernetes.io/projected/c21dee95-36c0-4f2a-983d-f7b8b8b383a9-kube-api-access-5zb9w\") pod \"c21dee95-36c0-4f2a-983d-f7b8b8b383a9\" (UID: \"c21dee95-36c0-4f2a-983d-f7b8b8b383a9\") " Jan 27 13:34:39 crc kubenswrapper[4786]: I0127 13:34:39.354849 4786 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c21dee95-36c0-4f2a-983d-f7b8b8b383a9-scripts" (OuterVolumeSpecName: "scripts") pod "c21dee95-36c0-4f2a-983d-f7b8b8b383a9" (UID: "c21dee95-36c0-4f2a-983d-f7b8b8b383a9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:34:39 crc kubenswrapper[4786]: I0127 13:34:39.354891 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c21dee95-36c0-4f2a-983d-f7b8b8b383a9-kube-api-access-5zb9w" (OuterVolumeSpecName: "kube-api-access-5zb9w") pod "c21dee95-36c0-4f2a-983d-f7b8b8b383a9" (UID: "c21dee95-36c0-4f2a-983d-f7b8b8b383a9"). InnerVolumeSpecName "kube-api-access-5zb9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:34:39 crc kubenswrapper[4786]: I0127 13:34:39.372288 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c21dee95-36c0-4f2a-983d-f7b8b8b383a9-config-data" (OuterVolumeSpecName: "config-data") pod "c21dee95-36c0-4f2a-983d-f7b8b8b383a9" (UID: "c21dee95-36c0-4f2a-983d-f7b8b8b383a9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:34:39 crc kubenswrapper[4786]: I0127 13:34:39.450769 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c21dee95-36c0-4f2a-983d-f7b8b8b383a9-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:39 crc kubenswrapper[4786]: I0127 13:34:39.450810 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zb9w\" (UniqueName: \"kubernetes.io/projected/c21dee95-36c0-4f2a-983d-f7b8b8b383a9-kube-api-access-5zb9w\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:39 crc kubenswrapper[4786]: I0127 13:34:39.450822 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c21dee95-36c0-4f2a-983d-f7b8b8b383a9-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:39 crc kubenswrapper[4786]: I0127 13:34:39.872386 4786 generic.go:334] "Generic (PLEG): container finished" podID="e67e53fe-61a8-4a3d-aa25-75530bca5677" containerID="73922993335a58cfa0e71a3a13cf37d6d984409db010bb4141672ea3810b86c9" exitCode=0 Jan 27 13:34:39 crc kubenswrapper[4786]: I0127 13:34:39.872474 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2xxqx" event={"ID":"e67e53fe-61a8-4a3d-aa25-75530bca5677","Type":"ContainerDied","Data":"73922993335a58cfa0e71a3a13cf37d6d984409db010bb4141672ea3810b86c9"} Jan 27 13:34:39 crc kubenswrapper[4786]: I0127 13:34:39.874628 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-4n2dz" event={"ID":"c21dee95-36c0-4f2a-983d-f7b8b8b383a9","Type":"ContainerDied","Data":"0c9f77d845f4977c58cca2e1827ee2a8d5e97f4fcd391a025a91eb2e661f64bd"} Jan 27 13:34:39 crc kubenswrapper[4786]: I0127 13:34:39.874666 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c9f77d845f4977c58cca2e1827ee2a8d5e97f4fcd391a025a91eb2e661f64bd" Jan 27 13:34:39 crc 
kubenswrapper[4786]: I0127 13:34:39.874684 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-4n2dz" Jan 27 13:34:39 crc kubenswrapper[4786]: I0127 13:34:39.938055 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 27 13:34:39 crc kubenswrapper[4786]: E0127 13:34:39.938648 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c21dee95-36c0-4f2a-983d-f7b8b8b383a9" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 27 13:34:39 crc kubenswrapper[4786]: I0127 13:34:39.938714 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="c21dee95-36c0-4f2a-983d-f7b8b8b383a9" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 27 13:34:39 crc kubenswrapper[4786]: I0127 13:34:39.938935 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="c21dee95-36c0-4f2a-983d-f7b8b8b383a9" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 27 13:34:39 crc kubenswrapper[4786]: I0127 13:34:39.939566 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:34:39 crc kubenswrapper[4786]: I0127 13:34:39.942322 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data" Jan 27 13:34:39 crc kubenswrapper[4786]: I0127 13:34:39.957943 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 27 13:34:40 crc kubenswrapper[4786]: I0127 13:34:40.062940 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxdxw\" (UniqueName: \"kubernetes.io/projected/d62b7637-2d50-41c6-8aaf-5f741b49e241-kube-api-access-sxdxw\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"d62b7637-2d50-41c6-8aaf-5f741b49e241\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:34:40 crc kubenswrapper[4786]: I0127 13:34:40.063085 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d62b7637-2d50-41c6-8aaf-5f741b49e241-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"d62b7637-2d50-41c6-8aaf-5f741b49e241\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:34:40 crc kubenswrapper[4786]: I0127 13:34:40.164082 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d62b7637-2d50-41c6-8aaf-5f741b49e241-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"d62b7637-2d50-41c6-8aaf-5f741b49e241\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:34:40 crc kubenswrapper[4786]: I0127 13:34:40.164207 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxdxw\" (UniqueName: \"kubernetes.io/projected/d62b7637-2d50-41c6-8aaf-5f741b49e241-kube-api-access-sxdxw\") pod \"nova-kuttl-cell1-conductor-0\" (UID: 
\"d62b7637-2d50-41c6-8aaf-5f741b49e241\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:34:40 crc kubenswrapper[4786]: I0127 13:34:40.178580 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d62b7637-2d50-41c6-8aaf-5f741b49e241-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"d62b7637-2d50-41c6-8aaf-5f741b49e241\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:34:40 crc kubenswrapper[4786]: I0127 13:34:40.181233 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxdxw\" (UniqueName: \"kubernetes.io/projected/d62b7637-2d50-41c6-8aaf-5f741b49e241-kube-api-access-sxdxw\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"d62b7637-2d50-41c6-8aaf-5f741b49e241\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:34:40 crc kubenswrapper[4786]: I0127 13:34:40.259057 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:34:40 crc kubenswrapper[4786]: I0127 13:34:40.694358 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 27 13:34:40 crc kubenswrapper[4786]: W0127 13:34:40.695993 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd62b7637_2d50_41c6_8aaf_5f741b49e241.slice/crio-7f99f03ffdb6db5905f2ffe89db57cf1540efc0681f825c0925ce29db5ae014d WatchSource:0}: Error finding container 7f99f03ffdb6db5905f2ffe89db57cf1540efc0681f825c0925ce29db5ae014d: Status 404 returned error can't find the container with id 7f99f03ffdb6db5905f2ffe89db57cf1540efc0681f825c0925ce29db5ae014d Jan 27 13:34:41 crc kubenswrapper[4786]: I0127 13:34:40.883832 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" 
event={"ID":"d62b7637-2d50-41c6-8aaf-5f741b49e241","Type":"ContainerStarted","Data":"7f99f03ffdb6db5905f2ffe89db57cf1540efc0681f825c0925ce29db5ae014d"} Jan 27 13:34:41 crc kubenswrapper[4786]: I0127 13:34:41.331583 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2xxqx" Jan 27 13:34:41 crc kubenswrapper[4786]: I0127 13:34:41.415264 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e67e53fe-61a8-4a3d-aa25-75530bca5677-scripts\") pod \"e67e53fe-61a8-4a3d-aa25-75530bca5677\" (UID: \"e67e53fe-61a8-4a3d-aa25-75530bca5677\") " Jan 27 13:34:41 crc kubenswrapper[4786]: I0127 13:34:41.415354 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e67e53fe-61a8-4a3d-aa25-75530bca5677-config-data\") pod \"e67e53fe-61a8-4a3d-aa25-75530bca5677\" (UID: \"e67e53fe-61a8-4a3d-aa25-75530bca5677\") " Jan 27 13:34:41 crc kubenswrapper[4786]: I0127 13:34:41.415447 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7hx6\" (UniqueName: \"kubernetes.io/projected/e67e53fe-61a8-4a3d-aa25-75530bca5677-kube-api-access-z7hx6\") pod \"e67e53fe-61a8-4a3d-aa25-75530bca5677\" (UID: \"e67e53fe-61a8-4a3d-aa25-75530bca5677\") " Jan 27 13:34:41 crc kubenswrapper[4786]: I0127 13:34:41.419071 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e67e53fe-61a8-4a3d-aa25-75530bca5677-scripts" (OuterVolumeSpecName: "scripts") pod "e67e53fe-61a8-4a3d-aa25-75530bca5677" (UID: "e67e53fe-61a8-4a3d-aa25-75530bca5677"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:34:41 crc kubenswrapper[4786]: I0127 13:34:41.419764 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e67e53fe-61a8-4a3d-aa25-75530bca5677-kube-api-access-z7hx6" (OuterVolumeSpecName: "kube-api-access-z7hx6") pod "e67e53fe-61a8-4a3d-aa25-75530bca5677" (UID: "e67e53fe-61a8-4a3d-aa25-75530bca5677"). InnerVolumeSpecName "kube-api-access-z7hx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:34:41 crc kubenswrapper[4786]: I0127 13:34:41.436184 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e67e53fe-61a8-4a3d-aa25-75530bca5677-config-data" (OuterVolumeSpecName: "config-data") pod "e67e53fe-61a8-4a3d-aa25-75530bca5677" (UID: "e67e53fe-61a8-4a3d-aa25-75530bca5677"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:34:41 crc kubenswrapper[4786]: I0127 13:34:41.516651 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7hx6\" (UniqueName: \"kubernetes.io/projected/e67e53fe-61a8-4a3d-aa25-75530bca5677-kube-api-access-z7hx6\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:41 crc kubenswrapper[4786]: I0127 13:34:41.516689 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e67e53fe-61a8-4a3d-aa25-75530bca5677-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:41 crc kubenswrapper[4786]: I0127 13:34:41.516701 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e67e53fe-61a8-4a3d-aa25-75530bca5677-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:41 crc kubenswrapper[4786]: I0127 13:34:41.898538 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" 
event={"ID":"d62b7637-2d50-41c6-8aaf-5f741b49e241","Type":"ContainerStarted","Data":"a88465c939ad7b809b684a203530f4aa809458dffe9b0256f104de7cecbccb34"} Jan 27 13:34:41 crc kubenswrapper[4786]: I0127 13:34:41.899179 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:34:41 crc kubenswrapper[4786]: I0127 13:34:41.900825 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2xxqx" event={"ID":"e67e53fe-61a8-4a3d-aa25-75530bca5677","Type":"ContainerDied","Data":"af143a514f149b284d59532c7c6b502144570da5779e4929678fbe7565bfb8f7"} Jan 27 13:34:41 crc kubenswrapper[4786]: I0127 13:34:41.900926 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2xxqx" Jan 27 13:34:41 crc kubenswrapper[4786]: I0127 13:34:41.900940 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af143a514f149b284d59532c7c6b502144570da5779e4929678fbe7565bfb8f7" Jan 27 13:34:41 crc kubenswrapper[4786]: I0127 13:34:41.926505 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podStartSLOduration=2.9264716589999997 podStartE2EDuration="2.926471659s" podCreationTimestamp="2026-01-27 13:34:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:34:41.921141893 +0000 UTC m=+1665.131756012" watchObservedRunningTime="2026-01-27 13:34:41.926471659 +0000 UTC m=+1665.137085788" Jan 27 13:34:42 crc kubenswrapper[4786]: I0127 13:34:42.049176 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:34:42 crc kubenswrapper[4786]: I0127 13:34:42.049241 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:34:42 crc kubenswrapper[4786]: I0127 13:34:42.069701 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:34:42 crc kubenswrapper[4786]: I0127 13:34:42.114529 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:34:42 crc kubenswrapper[4786]: I0127 13:34:42.114859 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="709420ac-3ac1-400d-9256-65483385afeb" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://6680561aabaccdac491f6edd8cb0f304538ad385c67391f99b5d30d0cbf7d148" gracePeriod=30 Jan 27 13:34:42 crc kubenswrapper[4786]: I0127 13:34:42.147133 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:34:42 crc kubenswrapper[4786]: I0127 13:34:42.147696 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="cafcf6be-d215-4fb1-b2fa-73e41ef77250" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://7e2f8f8cdf6b3a9f471989cec95101c69e85a025333ea25549a0a3728c3fc9b5" gracePeriod=30 Jan 27 13:34:42 crc kubenswrapper[4786]: I0127 13:34:42.147469 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="cafcf6be-d215-4fb1-b2fa-73e41ef77250" containerName="nova-kuttl-metadata-log" containerID="cri-o://6d757a40a0a7477e67d0d804fd2434aaac9cd2261b538324f5bce8aeb32683ab" gracePeriod=30 Jan 27 13:34:42 crc kubenswrapper[4786]: I0127 13:34:42.364902 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:34:42 crc kubenswrapper[4786]: I0127 13:34:42.375683 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:34:42 crc kubenswrapper[4786]: I0127 13:34:42.912648 4786 generic.go:334] "Generic (PLEG): container finished" podID="cafcf6be-d215-4fb1-b2fa-73e41ef77250" containerID="7e2f8f8cdf6b3a9f471989cec95101c69e85a025333ea25549a0a3728c3fc9b5" exitCode=0 Jan 27 13:34:42 crc kubenswrapper[4786]: I0127 13:34:42.912930 4786 generic.go:334] "Generic (PLEG): container finished" podID="cafcf6be-d215-4fb1-b2fa-73e41ef77250" containerID="6d757a40a0a7477e67d0d804fd2434aaac9cd2261b538324f5bce8aeb32683ab" exitCode=143 Jan 27 13:34:42 crc kubenswrapper[4786]: I0127 13:34:42.914053 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"cafcf6be-d215-4fb1-b2fa-73e41ef77250","Type":"ContainerDied","Data":"7e2f8f8cdf6b3a9f471989cec95101c69e85a025333ea25549a0a3728c3fc9b5"} Jan 27 13:34:42 crc kubenswrapper[4786]: I0127 13:34:42.914157 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"cafcf6be-d215-4fb1-b2fa-73e41ef77250","Type":"ContainerDied","Data":"6d757a40a0a7477e67d0d804fd2434aaac9cd2261b538324f5bce8aeb32683ab"} Jan 27 13:34:42 crc kubenswrapper[4786]: I0127 13:34:42.914217 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="989652e5-d96a-4ae1-b95c-ecea25dbae4e" containerName="nova-kuttl-api-log" containerID="cri-o://4b96c36d6a72849123bcc80fe16fb9324b0f3f88c937d88665b6ff353187964a" gracePeriod=30 Jan 27 13:34:42 crc kubenswrapper[4786]: I0127 13:34:42.914314 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="989652e5-d96a-4ae1-b95c-ecea25dbae4e" containerName="nova-kuttl-api-api" containerID="cri-o://ecef8bf18aadd498025e5a36e1ad6435e794efe39cbbdc30073a2d53d0862373" gracePeriod=30 Jan 27 13:34:42 crc kubenswrapper[4786]: I0127 13:34:42.922671 4786 
prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="989652e5-d96a-4ae1-b95c-ecea25dbae4e" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.156:8774/\": EOF" Jan 27 13:34:42 crc kubenswrapper[4786]: I0127 13:34:42.922848 4786 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="989652e5-d96a-4ae1-b95c-ecea25dbae4e" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.156:8774/\": EOF" Jan 27 13:34:42 crc kubenswrapper[4786]: I0127 13:34:42.931256 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:34:43 crc kubenswrapper[4786]: I0127 13:34:43.273312 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:34:43 crc kubenswrapper[4786]: I0127 13:34:43.447411 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbzd7\" (UniqueName: \"kubernetes.io/projected/cafcf6be-d215-4fb1-b2fa-73e41ef77250-kube-api-access-fbzd7\") pod \"cafcf6be-d215-4fb1-b2fa-73e41ef77250\" (UID: \"cafcf6be-d215-4fb1-b2fa-73e41ef77250\") " Jan 27 13:34:43 crc kubenswrapper[4786]: I0127 13:34:43.447827 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cafcf6be-d215-4fb1-b2fa-73e41ef77250-logs\") pod \"cafcf6be-d215-4fb1-b2fa-73e41ef77250\" (UID: \"cafcf6be-d215-4fb1-b2fa-73e41ef77250\") " Jan 27 13:34:43 crc kubenswrapper[4786]: I0127 13:34:43.447883 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cafcf6be-d215-4fb1-b2fa-73e41ef77250-config-data\") pod \"cafcf6be-d215-4fb1-b2fa-73e41ef77250\" (UID: \"cafcf6be-d215-4fb1-b2fa-73e41ef77250\") " Jan 27 
13:34:43 crc kubenswrapper[4786]: I0127 13:34:43.449877 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cafcf6be-d215-4fb1-b2fa-73e41ef77250-logs" (OuterVolumeSpecName: "logs") pod "cafcf6be-d215-4fb1-b2fa-73e41ef77250" (UID: "cafcf6be-d215-4fb1-b2fa-73e41ef77250"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:34:43 crc kubenswrapper[4786]: I0127 13:34:43.453701 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cafcf6be-d215-4fb1-b2fa-73e41ef77250-kube-api-access-fbzd7" (OuterVolumeSpecName: "kube-api-access-fbzd7") pod "cafcf6be-d215-4fb1-b2fa-73e41ef77250" (UID: "cafcf6be-d215-4fb1-b2fa-73e41ef77250"). InnerVolumeSpecName "kube-api-access-fbzd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:34:43 crc kubenswrapper[4786]: I0127 13:34:43.474004 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cafcf6be-d215-4fb1-b2fa-73e41ef77250-config-data" (OuterVolumeSpecName: "config-data") pod "cafcf6be-d215-4fb1-b2fa-73e41ef77250" (UID: "cafcf6be-d215-4fb1-b2fa-73e41ef77250"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:34:43 crc kubenswrapper[4786]: I0127 13:34:43.550061 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbzd7\" (UniqueName: \"kubernetes.io/projected/cafcf6be-d215-4fb1-b2fa-73e41ef77250-kube-api-access-fbzd7\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:43 crc kubenswrapper[4786]: I0127 13:34:43.550103 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cafcf6be-d215-4fb1-b2fa-73e41ef77250-logs\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:43 crc kubenswrapper[4786]: I0127 13:34:43.550112 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cafcf6be-d215-4fb1-b2fa-73e41ef77250-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:43 crc kubenswrapper[4786]: I0127 13:34:43.922564 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"cafcf6be-d215-4fb1-b2fa-73e41ef77250","Type":"ContainerDied","Data":"5721f868b176f98c7ac5f24b30a4fcc68d6f8ff1ce9ea471a3780cfaf4edebb9"} Jan 27 13:34:43 crc kubenswrapper[4786]: I0127 13:34:43.922642 4786 scope.go:117] "RemoveContainer" containerID="7e2f8f8cdf6b3a9f471989cec95101c69e85a025333ea25549a0a3728c3fc9b5" Jan 27 13:34:43 crc kubenswrapper[4786]: I0127 13:34:43.923017 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:34:43 crc kubenswrapper[4786]: I0127 13:34:43.928428 4786 generic.go:334] "Generic (PLEG): container finished" podID="989652e5-d96a-4ae1-b95c-ecea25dbae4e" containerID="4b96c36d6a72849123bcc80fe16fb9324b0f3f88c937d88665b6ff353187964a" exitCode=143 Jan 27 13:34:43 crc kubenswrapper[4786]: I0127 13:34:43.929022 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"989652e5-d96a-4ae1-b95c-ecea25dbae4e","Type":"ContainerDied","Data":"4b96c36d6a72849123bcc80fe16fb9324b0f3f88c937d88665b6ff353187964a"} Jan 27 13:34:43 crc kubenswrapper[4786]: I0127 13:34:43.945866 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:34:43 crc kubenswrapper[4786]: I0127 13:34:43.961865 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:34:43 crc kubenswrapper[4786]: I0127 13:34:43.975757 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:34:43 crc kubenswrapper[4786]: E0127 13:34:43.976182 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cafcf6be-d215-4fb1-b2fa-73e41ef77250" containerName="nova-kuttl-metadata-log" Jan 27 13:34:43 crc kubenswrapper[4786]: I0127 13:34:43.976200 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="cafcf6be-d215-4fb1-b2fa-73e41ef77250" containerName="nova-kuttl-metadata-log" Jan 27 13:34:43 crc kubenswrapper[4786]: E0127 13:34:43.976215 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e67e53fe-61a8-4a3d-aa25-75530bca5677" containerName="nova-manage" Jan 27 13:34:43 crc kubenswrapper[4786]: I0127 13:34:43.976224 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="e67e53fe-61a8-4a3d-aa25-75530bca5677" containerName="nova-manage" Jan 27 13:34:43 crc kubenswrapper[4786]: E0127 13:34:43.976249 
4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cafcf6be-d215-4fb1-b2fa-73e41ef77250" containerName="nova-kuttl-metadata-metadata" Jan 27 13:34:43 crc kubenswrapper[4786]: I0127 13:34:43.976258 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="cafcf6be-d215-4fb1-b2fa-73e41ef77250" containerName="nova-kuttl-metadata-metadata" Jan 27 13:34:43 crc kubenswrapper[4786]: I0127 13:34:43.976430 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="cafcf6be-d215-4fb1-b2fa-73e41ef77250" containerName="nova-kuttl-metadata-log" Jan 27 13:34:43 crc kubenswrapper[4786]: I0127 13:34:43.976450 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="cafcf6be-d215-4fb1-b2fa-73e41ef77250" containerName="nova-kuttl-metadata-metadata" Jan 27 13:34:43 crc kubenswrapper[4786]: I0127 13:34:43.976465 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="e67e53fe-61a8-4a3d-aa25-75530bca5677" containerName="nova-manage" Jan 27 13:34:43 crc kubenswrapper[4786]: I0127 13:34:43.977412 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:34:43 crc kubenswrapper[4786]: I0127 13:34:43.981040 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 27 13:34:43 crc kubenswrapper[4786]: I0127 13:34:43.984339 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:34:43 crc kubenswrapper[4786]: I0127 13:34:43.986391 4786 scope.go:117] "RemoveContainer" containerID="6d757a40a0a7477e67d0d804fd2434aaac9cd2261b538324f5bce8aeb32683ab" Jan 27 13:34:44 crc kubenswrapper[4786]: I0127 13:34:44.158515 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkv9d\" (UniqueName: \"kubernetes.io/projected/2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1-kube-api-access-vkv9d\") pod \"nova-kuttl-metadata-0\" (UID: \"2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:34:44 crc kubenswrapper[4786]: I0127 13:34:44.158580 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:34:44 crc kubenswrapper[4786]: I0127 13:34:44.158646 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:34:44 crc kubenswrapper[4786]: I0127 13:34:44.260343 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkv9d\" (UniqueName: 
\"kubernetes.io/projected/2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1-kube-api-access-vkv9d\") pod \"nova-kuttl-metadata-0\" (UID: \"2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:34:44 crc kubenswrapper[4786]: I0127 13:34:44.260396 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:34:44 crc kubenswrapper[4786]: I0127 13:34:44.260427 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:34:44 crc kubenswrapper[4786]: I0127 13:34:44.261339 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:34:44 crc kubenswrapper[4786]: I0127 13:34:44.263980 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:34:44 crc kubenswrapper[4786]: I0127 13:34:44.292656 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkv9d\" (UniqueName: \"kubernetes.io/projected/2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1-kube-api-access-vkv9d\") pod \"nova-kuttl-metadata-0\" (UID: 
\"2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:34:44 crc kubenswrapper[4786]: I0127 13:34:44.305490 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:34:44 crc kubenswrapper[4786]: I0127 13:34:44.465424 4786 scope.go:117] "RemoveContainer" containerID="2f94fef961bba31453718ee505c65aba4d5e2d46c581dccdf2daaebbe3a39906" Jan 27 13:34:44 crc kubenswrapper[4786]: E0127 13:34:44.466030 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:34:44 crc kubenswrapper[4786]: I0127 13:34:44.739752 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:34:44 crc kubenswrapper[4786]: I0127 13:34:44.942156 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1","Type":"ContainerStarted","Data":"51104a330730b609dc5fbd534c125cd3e02d083da8d39df1235205d2bdf30b60"} Jan 27 13:34:45 crc kubenswrapper[4786]: I0127 13:34:45.287326 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:34:45 crc kubenswrapper[4786]: I0127 13:34:45.477691 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cafcf6be-d215-4fb1-b2fa-73e41ef77250" path="/var/lib/kubelet/pods/cafcf6be-d215-4fb1-b2fa-73e41ef77250/volumes" Jan 27 13:34:45 crc kubenswrapper[4786]: I0127 13:34:45.690113 4786 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-mnd7z"] Jan 27 13:34:45 crc kubenswrapper[4786]: I0127 13:34:45.694150 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-mnd7z" Jan 27 13:34:45 crc kubenswrapper[4786]: I0127 13:34:45.709023 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-config-data" Jan 27 13:34:45 crc kubenswrapper[4786]: I0127 13:34:45.717722 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-scripts" Jan 27 13:34:45 crc kubenswrapper[4786]: I0127 13:34:45.732227 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-mnd7z"] Jan 27 13:34:45 crc kubenswrapper[4786]: I0127 13:34:45.783130 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da9c38f7-ea97-4faa-922f-a3087cee1b21-config-data\") pod \"nova-kuttl-cell1-cell-mapping-mnd7z\" (UID: \"da9c38f7-ea97-4faa-922f-a3087cee1b21\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-mnd7z" Jan 27 13:34:45 crc kubenswrapper[4786]: I0127 13:34:45.783184 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da9c38f7-ea97-4faa-922f-a3087cee1b21-scripts\") pod \"nova-kuttl-cell1-cell-mapping-mnd7z\" (UID: \"da9c38f7-ea97-4faa-922f-a3087cee1b21\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-mnd7z" Jan 27 13:34:45 crc kubenswrapper[4786]: I0127 13:34:45.783576 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9htt\" (UniqueName: \"kubernetes.io/projected/da9c38f7-ea97-4faa-922f-a3087cee1b21-kube-api-access-q9htt\") pod \"nova-kuttl-cell1-cell-mapping-mnd7z\" (UID: 
\"da9c38f7-ea97-4faa-922f-a3087cee1b21\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-mnd7z" Jan 27 13:34:45 crc kubenswrapper[4786]: I0127 13:34:45.885043 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da9c38f7-ea97-4faa-922f-a3087cee1b21-config-data\") pod \"nova-kuttl-cell1-cell-mapping-mnd7z\" (UID: \"da9c38f7-ea97-4faa-922f-a3087cee1b21\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-mnd7z" Jan 27 13:34:45 crc kubenswrapper[4786]: I0127 13:34:45.885086 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da9c38f7-ea97-4faa-922f-a3087cee1b21-scripts\") pod \"nova-kuttl-cell1-cell-mapping-mnd7z\" (UID: \"da9c38f7-ea97-4faa-922f-a3087cee1b21\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-mnd7z" Jan 27 13:34:45 crc kubenswrapper[4786]: I0127 13:34:45.885200 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9htt\" (UniqueName: \"kubernetes.io/projected/da9c38f7-ea97-4faa-922f-a3087cee1b21-kube-api-access-q9htt\") pod \"nova-kuttl-cell1-cell-mapping-mnd7z\" (UID: \"da9c38f7-ea97-4faa-922f-a3087cee1b21\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-mnd7z" Jan 27 13:34:45 crc kubenswrapper[4786]: I0127 13:34:45.897849 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da9c38f7-ea97-4faa-922f-a3087cee1b21-scripts\") pod \"nova-kuttl-cell1-cell-mapping-mnd7z\" (UID: \"da9c38f7-ea97-4faa-922f-a3087cee1b21\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-mnd7z" Jan 27 13:34:45 crc kubenswrapper[4786]: I0127 13:34:45.897972 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da9c38f7-ea97-4faa-922f-a3087cee1b21-config-data\") pod \"nova-kuttl-cell1-cell-mapping-mnd7z\" 
(UID: \"da9c38f7-ea97-4faa-922f-a3087cee1b21\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-mnd7z" Jan 27 13:34:45 crc kubenswrapper[4786]: I0127 13:34:45.914234 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9htt\" (UniqueName: \"kubernetes.io/projected/da9c38f7-ea97-4faa-922f-a3087cee1b21-kube-api-access-q9htt\") pod \"nova-kuttl-cell1-cell-mapping-mnd7z\" (UID: \"da9c38f7-ea97-4faa-922f-a3087cee1b21\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-mnd7z" Jan 27 13:34:45 crc kubenswrapper[4786]: I0127 13:34:45.952620 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1","Type":"ContainerStarted","Data":"e63bce6daf2cf42b8c3e22a52e75ae6ea62e00717c66ade206c71317a201a851"} Jan 27 13:34:45 crc kubenswrapper[4786]: I0127 13:34:45.952686 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1","Type":"ContainerStarted","Data":"75a5cd3bdf78b620c4be2a5f6bc59b54e25a25cd522e3be3fa7db022268ef66d"} Jan 27 13:34:45 crc kubenswrapper[4786]: I0127 13:34:45.969236 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.9692177920000002 podStartE2EDuration="2.969217792s" podCreationTimestamp="2026-01-27 13:34:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:34:45.967929287 +0000 UTC m=+1669.178543406" watchObservedRunningTime="2026-01-27 13:34:45.969217792 +0000 UTC m=+1669.179831911" Jan 27 13:34:46 crc kubenswrapper[4786]: I0127 13:34:46.019885 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-mnd7z" Jan 27 13:34:46 crc kubenswrapper[4786]: I0127 13:34:46.442217 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-mnd7z"] Jan 27 13:34:46 crc kubenswrapper[4786]: W0127 13:34:46.444598 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda9c38f7_ea97_4faa_922f_a3087cee1b21.slice/crio-b15996921878512eb4a55591bdfeff049ec86a7ab82e24d53ec8fe33039e07b9 WatchSource:0}: Error finding container b15996921878512eb4a55591bdfeff049ec86a7ab82e24d53ec8fe33039e07b9: Status 404 returned error can't find the container with id b15996921878512eb4a55591bdfeff049ec86a7ab82e24d53ec8fe33039e07b9 Jan 27 13:34:46 crc kubenswrapper[4786]: I0127 13:34:46.965013 4786 generic.go:334] "Generic (PLEG): container finished" podID="709420ac-3ac1-400d-9256-65483385afeb" containerID="6680561aabaccdac491f6edd8cb0f304538ad385c67391f99b5d30d0cbf7d148" exitCode=0 Jan 27 13:34:46 crc kubenswrapper[4786]: I0127 13:34:46.965111 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"709420ac-3ac1-400d-9256-65483385afeb","Type":"ContainerDied","Data":"6680561aabaccdac491f6edd8cb0f304538ad385c67391f99b5d30d0cbf7d148"} Jan 27 13:34:46 crc kubenswrapper[4786]: I0127 13:34:46.967998 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-mnd7z" event={"ID":"da9c38f7-ea97-4faa-922f-a3087cee1b21","Type":"ContainerStarted","Data":"b15996921878512eb4a55591bdfeff049ec86a7ab82e24d53ec8fe33039e07b9"} Jan 27 13:34:47 crc kubenswrapper[4786]: I0127 13:34:47.329113 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:34:47 crc kubenswrapper[4786]: I0127 13:34:47.514366 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hxvv\" (UniqueName: \"kubernetes.io/projected/709420ac-3ac1-400d-9256-65483385afeb-kube-api-access-8hxvv\") pod \"709420ac-3ac1-400d-9256-65483385afeb\" (UID: \"709420ac-3ac1-400d-9256-65483385afeb\") " Jan 27 13:34:47 crc kubenswrapper[4786]: I0127 13:34:47.514539 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/709420ac-3ac1-400d-9256-65483385afeb-config-data\") pod \"709420ac-3ac1-400d-9256-65483385afeb\" (UID: \"709420ac-3ac1-400d-9256-65483385afeb\") " Jan 27 13:34:47 crc kubenswrapper[4786]: I0127 13:34:47.519053 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/709420ac-3ac1-400d-9256-65483385afeb-kube-api-access-8hxvv" (OuterVolumeSpecName: "kube-api-access-8hxvv") pod "709420ac-3ac1-400d-9256-65483385afeb" (UID: "709420ac-3ac1-400d-9256-65483385afeb"). InnerVolumeSpecName "kube-api-access-8hxvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:34:47 crc kubenswrapper[4786]: I0127 13:34:47.536915 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/709420ac-3ac1-400d-9256-65483385afeb-config-data" (OuterVolumeSpecName: "config-data") pod "709420ac-3ac1-400d-9256-65483385afeb" (UID: "709420ac-3ac1-400d-9256-65483385afeb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:34:47 crc kubenswrapper[4786]: I0127 13:34:47.619193 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hxvv\" (UniqueName: \"kubernetes.io/projected/709420ac-3ac1-400d-9256-65483385afeb-kube-api-access-8hxvv\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:47 crc kubenswrapper[4786]: I0127 13:34:47.619229 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/709420ac-3ac1-400d-9256-65483385afeb-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:47 crc kubenswrapper[4786]: I0127 13:34:47.976656 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"709420ac-3ac1-400d-9256-65483385afeb","Type":"ContainerDied","Data":"906fc5bb4caea5be07c5bb019e2f5551b17a240c2224cd383b0440309cf67cf4"} Jan 27 13:34:47 crc kubenswrapper[4786]: I0127 13:34:47.976692 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:34:47 crc kubenswrapper[4786]: I0127 13:34:47.976701 4786 scope.go:117] "RemoveContainer" containerID="6680561aabaccdac491f6edd8cb0f304538ad385c67391f99b5d30d0cbf7d148" Jan 27 13:34:47 crc kubenswrapper[4786]: I0127 13:34:47.978142 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-mnd7z" event={"ID":"da9c38f7-ea97-4faa-922f-a3087cee1b21","Type":"ContainerStarted","Data":"5271c694b2af73ba565f0638988720da96c7ed728357af14050c55179f143e5a"} Jan 27 13:34:48 crc kubenswrapper[4786]: I0127 13:34:48.004822 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-mnd7z" podStartSLOduration=3.004787572 podStartE2EDuration="3.004787572s" podCreationTimestamp="2026-01-27 13:34:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:34:48.004341939 +0000 UTC m=+1671.214956058" watchObservedRunningTime="2026-01-27 13:34:48.004787572 +0000 UTC m=+1671.215401691" Jan 27 13:34:48 crc kubenswrapper[4786]: I0127 13:34:48.022742 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:34:48 crc kubenswrapper[4786]: I0127 13:34:48.029377 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:34:48 crc kubenswrapper[4786]: I0127 13:34:48.049755 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:34:48 crc kubenswrapper[4786]: E0127 13:34:48.050090 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="709420ac-3ac1-400d-9256-65483385afeb" containerName="nova-kuttl-scheduler-scheduler" Jan 27 13:34:48 crc kubenswrapper[4786]: I0127 13:34:48.050106 4786 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="709420ac-3ac1-400d-9256-65483385afeb" containerName="nova-kuttl-scheduler-scheduler" Jan 27 13:34:48 crc kubenswrapper[4786]: I0127 13:34:48.050252 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="709420ac-3ac1-400d-9256-65483385afeb" containerName="nova-kuttl-scheduler-scheduler" Jan 27 13:34:48 crc kubenswrapper[4786]: I0127 13:34:48.050787 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:34:48 crc kubenswrapper[4786]: I0127 13:34:48.060191 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:34:48 crc kubenswrapper[4786]: I0127 13:34:48.061007 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 27 13:34:48 crc kubenswrapper[4786]: I0127 13:34:48.229986 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6jgr\" (UniqueName: \"kubernetes.io/projected/2d84fda9-8996-4d17-84f4-df4479ed4ba2-kube-api-access-r6jgr\") pod \"nova-kuttl-scheduler-0\" (UID: \"2d84fda9-8996-4d17-84f4-df4479ed4ba2\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:34:48 crc kubenswrapper[4786]: I0127 13:34:48.230438 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d84fda9-8996-4d17-84f4-df4479ed4ba2-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"2d84fda9-8996-4d17-84f4-df4479ed4ba2\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:34:48 crc kubenswrapper[4786]: I0127 13:34:48.332944 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6jgr\" (UniqueName: \"kubernetes.io/projected/2d84fda9-8996-4d17-84f4-df4479ed4ba2-kube-api-access-r6jgr\") pod \"nova-kuttl-scheduler-0\" (UID: 
\"2d84fda9-8996-4d17-84f4-df4479ed4ba2\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:34:48 crc kubenswrapper[4786]: I0127 13:34:48.333113 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d84fda9-8996-4d17-84f4-df4479ed4ba2-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"2d84fda9-8996-4d17-84f4-df4479ed4ba2\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:34:48 crc kubenswrapper[4786]: I0127 13:34:48.339004 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d84fda9-8996-4d17-84f4-df4479ed4ba2-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"2d84fda9-8996-4d17-84f4-df4479ed4ba2\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:34:48 crc kubenswrapper[4786]: I0127 13:34:48.354161 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6jgr\" (UniqueName: \"kubernetes.io/projected/2d84fda9-8996-4d17-84f4-df4479ed4ba2-kube-api-access-r6jgr\") pod \"nova-kuttl-scheduler-0\" (UID: \"2d84fda9-8996-4d17-84f4-df4479ed4ba2\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:34:48 crc kubenswrapper[4786]: I0127 13:34:48.372984 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:34:48 crc kubenswrapper[4786]: I0127 13:34:48.827043 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:34:48 crc kubenswrapper[4786]: W0127 13:34:48.831870 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d84fda9_8996_4d17_84f4_df4479ed4ba2.slice/crio-1bc7500955a7c7c6d77d47aad04498e483de3ca0f12af3e69e482b4974ad7c99 WatchSource:0}: Error finding container 1bc7500955a7c7c6d77d47aad04498e483de3ca0f12af3e69e482b4974ad7c99: Status 404 returned error can't find the container with id 1bc7500955a7c7c6d77d47aad04498e483de3ca0f12af3e69e482b4974ad7c99 Jan 27 13:34:49 crc kubenswrapper[4786]: I0127 13:34:49.007135 4786 generic.go:334] "Generic (PLEG): container finished" podID="989652e5-d96a-4ae1-b95c-ecea25dbae4e" containerID="ecef8bf18aadd498025e5a36e1ad6435e794efe39cbbdc30073a2d53d0862373" exitCode=0 Jan 27 13:34:49 crc kubenswrapper[4786]: I0127 13:34:49.007236 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"989652e5-d96a-4ae1-b95c-ecea25dbae4e","Type":"ContainerDied","Data":"ecef8bf18aadd498025e5a36e1ad6435e794efe39cbbdc30073a2d53d0862373"} Jan 27 13:34:49 crc kubenswrapper[4786]: I0127 13:34:49.007277 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"989652e5-d96a-4ae1-b95c-ecea25dbae4e","Type":"ContainerDied","Data":"9cc8d26421be744bee758dfb059695904d8536ab6ed9af06ff5250457064096a"} Jan 27 13:34:49 crc kubenswrapper[4786]: I0127 13:34:49.007292 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cc8d26421be744bee758dfb059695904d8536ab6ed9af06ff5250457064096a" Jan 27 13:34:49 crc kubenswrapper[4786]: I0127 13:34:49.009749 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"2d84fda9-8996-4d17-84f4-df4479ed4ba2","Type":"ContainerStarted","Data":"1bc7500955a7c7c6d77d47aad04498e483de3ca0f12af3e69e482b4974ad7c99"} Jan 27 13:34:49 crc kubenswrapper[4786]: I0127 13:34:49.027253 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:34:49 crc kubenswrapper[4786]: I0127 13:34:49.147030 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmtqh\" (UniqueName: \"kubernetes.io/projected/989652e5-d96a-4ae1-b95c-ecea25dbae4e-kube-api-access-zmtqh\") pod \"989652e5-d96a-4ae1-b95c-ecea25dbae4e\" (UID: \"989652e5-d96a-4ae1-b95c-ecea25dbae4e\") " Jan 27 13:34:49 crc kubenswrapper[4786]: I0127 13:34:49.147382 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/989652e5-d96a-4ae1-b95c-ecea25dbae4e-logs\") pod \"989652e5-d96a-4ae1-b95c-ecea25dbae4e\" (UID: \"989652e5-d96a-4ae1-b95c-ecea25dbae4e\") " Jan 27 13:34:49 crc kubenswrapper[4786]: I0127 13:34:49.147477 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/989652e5-d96a-4ae1-b95c-ecea25dbae4e-config-data\") pod \"989652e5-d96a-4ae1-b95c-ecea25dbae4e\" (UID: \"989652e5-d96a-4ae1-b95c-ecea25dbae4e\") " Jan 27 13:34:49 crc kubenswrapper[4786]: I0127 13:34:49.148042 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/989652e5-d96a-4ae1-b95c-ecea25dbae4e-logs" (OuterVolumeSpecName: "logs") pod "989652e5-d96a-4ae1-b95c-ecea25dbae4e" (UID: "989652e5-d96a-4ae1-b95c-ecea25dbae4e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:34:49 crc kubenswrapper[4786]: I0127 13:34:49.159544 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/989652e5-d96a-4ae1-b95c-ecea25dbae4e-kube-api-access-zmtqh" (OuterVolumeSpecName: "kube-api-access-zmtqh") pod "989652e5-d96a-4ae1-b95c-ecea25dbae4e" (UID: "989652e5-d96a-4ae1-b95c-ecea25dbae4e"). InnerVolumeSpecName "kube-api-access-zmtqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:34:49 crc kubenswrapper[4786]: I0127 13:34:49.168415 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/989652e5-d96a-4ae1-b95c-ecea25dbae4e-config-data" (OuterVolumeSpecName: "config-data") pod "989652e5-d96a-4ae1-b95c-ecea25dbae4e" (UID: "989652e5-d96a-4ae1-b95c-ecea25dbae4e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:34:49 crc kubenswrapper[4786]: I0127 13:34:49.249801 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmtqh\" (UniqueName: \"kubernetes.io/projected/989652e5-d96a-4ae1-b95c-ecea25dbae4e-kube-api-access-zmtqh\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:49 crc kubenswrapper[4786]: I0127 13:34:49.249844 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/989652e5-d96a-4ae1-b95c-ecea25dbae4e-logs\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:49 crc kubenswrapper[4786]: I0127 13:34:49.249857 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/989652e5-d96a-4ae1-b95c-ecea25dbae4e-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:49 crc kubenswrapper[4786]: I0127 13:34:49.305992 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:34:49 crc kubenswrapper[4786]: I0127 13:34:49.306047 4786 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:34:49 crc kubenswrapper[4786]: I0127 13:34:49.474283 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="709420ac-3ac1-400d-9256-65483385afeb" path="/var/lib/kubelet/pods/709420ac-3ac1-400d-9256-65483385afeb/volumes" Jan 27 13:34:50 crc kubenswrapper[4786]: I0127 13:34:50.021820 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"2d84fda9-8996-4d17-84f4-df4479ed4ba2","Type":"ContainerStarted","Data":"b55fc94c0c9afff31148eab1daecd76d73551bb2b6a7182a14ae32148f408f77"} Jan 27 13:34:50 crc kubenswrapper[4786]: I0127 13:34:50.021860 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:34:50 crc kubenswrapper[4786]: I0127 13:34:50.043135 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.043117868 podStartE2EDuration="2.043117868s" podCreationTimestamp="2026-01-27 13:34:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:34:50.042873461 +0000 UTC m=+1673.253487580" watchObservedRunningTime="2026-01-27 13:34:50.043117868 +0000 UTC m=+1673.253731987" Jan 27 13:34:50 crc kubenswrapper[4786]: I0127 13:34:50.063153 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:34:50 crc kubenswrapper[4786]: I0127 13:34:50.072053 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:34:50 crc kubenswrapper[4786]: I0127 13:34:50.079983 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:34:50 crc kubenswrapper[4786]: E0127 13:34:50.080322 4786 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="989652e5-d96a-4ae1-b95c-ecea25dbae4e" containerName="nova-kuttl-api-api" Jan 27 13:34:50 crc kubenswrapper[4786]: I0127 13:34:50.080338 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="989652e5-d96a-4ae1-b95c-ecea25dbae4e" containerName="nova-kuttl-api-api" Jan 27 13:34:50 crc kubenswrapper[4786]: E0127 13:34:50.080350 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="989652e5-d96a-4ae1-b95c-ecea25dbae4e" containerName="nova-kuttl-api-log" Jan 27 13:34:50 crc kubenswrapper[4786]: I0127 13:34:50.080356 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="989652e5-d96a-4ae1-b95c-ecea25dbae4e" containerName="nova-kuttl-api-log" Jan 27 13:34:50 crc kubenswrapper[4786]: I0127 13:34:50.080505 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="989652e5-d96a-4ae1-b95c-ecea25dbae4e" containerName="nova-kuttl-api-api" Jan 27 13:34:50 crc kubenswrapper[4786]: I0127 13:34:50.080533 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="989652e5-d96a-4ae1-b95c-ecea25dbae4e" containerName="nova-kuttl-api-log" Jan 27 13:34:50 crc kubenswrapper[4786]: I0127 13:34:50.081528 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:34:50 crc kubenswrapper[4786]: I0127 13:34:50.083456 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 27 13:34:50 crc kubenswrapper[4786]: I0127 13:34:50.103316 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:34:50 crc kubenswrapper[4786]: I0127 13:34:50.265704 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2e297e7-9827-4625-8881-d54fccb87ed5-config-data\") pod \"nova-kuttl-api-0\" (UID: \"e2e297e7-9827-4625-8881-d54fccb87ed5\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:34:50 crc kubenswrapper[4786]: I0127 13:34:50.265773 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kflw8\" (UniqueName: \"kubernetes.io/projected/e2e297e7-9827-4625-8881-d54fccb87ed5-kube-api-access-kflw8\") pod \"nova-kuttl-api-0\" (UID: \"e2e297e7-9827-4625-8881-d54fccb87ed5\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:34:50 crc kubenswrapper[4786]: I0127 13:34:50.266007 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2e297e7-9827-4625-8881-d54fccb87ed5-logs\") pod \"nova-kuttl-api-0\" (UID: \"e2e297e7-9827-4625-8881-d54fccb87ed5\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:34:50 crc kubenswrapper[4786]: I0127 13:34:50.367401 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2e297e7-9827-4625-8881-d54fccb87ed5-config-data\") pod \"nova-kuttl-api-0\" (UID: \"e2e297e7-9827-4625-8881-d54fccb87ed5\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:34:50 crc kubenswrapper[4786]: I0127 13:34:50.367762 
4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kflw8\" (UniqueName: \"kubernetes.io/projected/e2e297e7-9827-4625-8881-d54fccb87ed5-kube-api-access-kflw8\") pod \"nova-kuttl-api-0\" (UID: \"e2e297e7-9827-4625-8881-d54fccb87ed5\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:34:50 crc kubenswrapper[4786]: I0127 13:34:50.367838 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2e297e7-9827-4625-8881-d54fccb87ed5-logs\") pod \"nova-kuttl-api-0\" (UID: \"e2e297e7-9827-4625-8881-d54fccb87ed5\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:34:50 crc kubenswrapper[4786]: I0127 13:34:50.368279 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2e297e7-9827-4625-8881-d54fccb87ed5-logs\") pod \"nova-kuttl-api-0\" (UID: \"e2e297e7-9827-4625-8881-d54fccb87ed5\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:34:50 crc kubenswrapper[4786]: I0127 13:34:50.382864 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2e297e7-9827-4625-8881-d54fccb87ed5-config-data\") pod \"nova-kuttl-api-0\" (UID: \"e2e297e7-9827-4625-8881-d54fccb87ed5\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:34:50 crc kubenswrapper[4786]: I0127 13:34:50.385362 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kflw8\" (UniqueName: \"kubernetes.io/projected/e2e297e7-9827-4625-8881-d54fccb87ed5-kube-api-access-kflw8\") pod \"nova-kuttl-api-0\" (UID: \"e2e297e7-9827-4625-8881-d54fccb87ed5\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:34:50 crc kubenswrapper[4786]: I0127 13:34:50.399712 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:34:50 crc kubenswrapper[4786]: I0127 13:34:50.943916 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:34:51 crc kubenswrapper[4786]: I0127 13:34:51.044946 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"e2e297e7-9827-4625-8881-d54fccb87ed5","Type":"ContainerStarted","Data":"931e94b3be4c7ee203ad4e9d47860eb11432fadbee83f3e0f77c57d2f7bd6ac5"} Jan 27 13:34:51 crc kubenswrapper[4786]: I0127 13:34:51.474350 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="989652e5-d96a-4ae1-b95c-ecea25dbae4e" path="/var/lib/kubelet/pods/989652e5-d96a-4ae1-b95c-ecea25dbae4e/volumes" Jan 27 13:34:52 crc kubenswrapper[4786]: I0127 13:34:52.053866 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"e2e297e7-9827-4625-8881-d54fccb87ed5","Type":"ContainerStarted","Data":"cda21eeb34f4ccc965ebeaca0b5d6e7d0d259b5ce1901f393ea44c32654535a2"} Jan 27 13:34:52 crc kubenswrapper[4786]: I0127 13:34:52.053908 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"e2e297e7-9827-4625-8881-d54fccb87ed5","Type":"ContainerStarted","Data":"e0f3ddfa94641fb3a73e05d26f68718fd56ccff9827c7204dc0bd81a4263ebac"} Jan 27 13:34:52 crc kubenswrapper[4786]: I0127 13:34:52.055342 4786 generic.go:334] "Generic (PLEG): container finished" podID="da9c38f7-ea97-4faa-922f-a3087cee1b21" containerID="5271c694b2af73ba565f0638988720da96c7ed728357af14050c55179f143e5a" exitCode=0 Jan 27 13:34:52 crc kubenswrapper[4786]: I0127 13:34:52.055364 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-mnd7z" event={"ID":"da9c38f7-ea97-4faa-922f-a3087cee1b21","Type":"ContainerDied","Data":"5271c694b2af73ba565f0638988720da96c7ed728357af14050c55179f143e5a"} 
Jan 27 13:34:52 crc kubenswrapper[4786]: I0127 13:34:52.076798 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.076780936 podStartE2EDuration="2.076780936s" podCreationTimestamp="2026-01-27 13:34:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:34:52.069874257 +0000 UTC m=+1675.280488396" watchObservedRunningTime="2026-01-27 13:34:52.076780936 +0000 UTC m=+1675.287395055" Jan 27 13:34:53 crc kubenswrapper[4786]: I0127 13:34:53.374010 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:34:53 crc kubenswrapper[4786]: I0127 13:34:53.382198 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-mnd7z" Jan 27 13:34:53 crc kubenswrapper[4786]: I0127 13:34:53.532830 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da9c38f7-ea97-4faa-922f-a3087cee1b21-config-data\") pod \"da9c38f7-ea97-4faa-922f-a3087cee1b21\" (UID: \"da9c38f7-ea97-4faa-922f-a3087cee1b21\") " Jan 27 13:34:53 crc kubenswrapper[4786]: I0127 13:34:53.532934 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9htt\" (UniqueName: \"kubernetes.io/projected/da9c38f7-ea97-4faa-922f-a3087cee1b21-kube-api-access-q9htt\") pod \"da9c38f7-ea97-4faa-922f-a3087cee1b21\" (UID: \"da9c38f7-ea97-4faa-922f-a3087cee1b21\") " Jan 27 13:34:53 crc kubenswrapper[4786]: I0127 13:34:53.533012 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da9c38f7-ea97-4faa-922f-a3087cee1b21-scripts\") pod \"da9c38f7-ea97-4faa-922f-a3087cee1b21\" (UID: 
\"da9c38f7-ea97-4faa-922f-a3087cee1b21\") " Jan 27 13:34:53 crc kubenswrapper[4786]: I0127 13:34:53.540064 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da9c38f7-ea97-4faa-922f-a3087cee1b21-scripts" (OuterVolumeSpecName: "scripts") pod "da9c38f7-ea97-4faa-922f-a3087cee1b21" (UID: "da9c38f7-ea97-4faa-922f-a3087cee1b21"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:34:53 crc kubenswrapper[4786]: I0127 13:34:53.544858 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da9c38f7-ea97-4faa-922f-a3087cee1b21-kube-api-access-q9htt" (OuterVolumeSpecName: "kube-api-access-q9htt") pod "da9c38f7-ea97-4faa-922f-a3087cee1b21" (UID: "da9c38f7-ea97-4faa-922f-a3087cee1b21"). InnerVolumeSpecName "kube-api-access-q9htt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:34:53 crc kubenswrapper[4786]: I0127 13:34:53.558050 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da9c38f7-ea97-4faa-922f-a3087cee1b21-config-data" (OuterVolumeSpecName: "config-data") pod "da9c38f7-ea97-4faa-922f-a3087cee1b21" (UID: "da9c38f7-ea97-4faa-922f-a3087cee1b21"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:34:53 crc kubenswrapper[4786]: I0127 13:34:53.638545 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da9c38f7-ea97-4faa-922f-a3087cee1b21-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:53 crc kubenswrapper[4786]: I0127 13:34:53.638591 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9htt\" (UniqueName: \"kubernetes.io/projected/da9c38f7-ea97-4faa-922f-a3087cee1b21-kube-api-access-q9htt\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:53 crc kubenswrapper[4786]: I0127 13:34:53.638605 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da9c38f7-ea97-4faa-922f-a3087cee1b21-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:54 crc kubenswrapper[4786]: I0127 13:34:54.071282 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-mnd7z" event={"ID":"da9c38f7-ea97-4faa-922f-a3087cee1b21","Type":"ContainerDied","Data":"b15996921878512eb4a55591bdfeff049ec86a7ab82e24d53ec8fe33039e07b9"} Jan 27 13:34:54 crc kubenswrapper[4786]: I0127 13:34:54.071326 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b15996921878512eb4a55591bdfeff049ec86a7ab82e24d53ec8fe33039e07b9" Jan 27 13:34:54 crc kubenswrapper[4786]: I0127 13:34:54.071353 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-mnd7z" Jan 27 13:34:54 crc kubenswrapper[4786]: I0127 13:34:54.259831 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:34:54 crc kubenswrapper[4786]: I0127 13:34:54.260036 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="e2e297e7-9827-4625-8881-d54fccb87ed5" containerName="nova-kuttl-api-log" containerID="cri-o://e0f3ddfa94641fb3a73e05d26f68718fd56ccff9827c7204dc0bd81a4263ebac" gracePeriod=30 Jan 27 13:34:54 crc kubenswrapper[4786]: I0127 13:34:54.260120 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="e2e297e7-9827-4625-8881-d54fccb87ed5" containerName="nova-kuttl-api-api" containerID="cri-o://cda21eeb34f4ccc965ebeaca0b5d6e7d0d259b5ce1901f393ea44c32654535a2" gracePeriod=30 Jan 27 13:34:54 crc kubenswrapper[4786]: I0127 13:34:54.293050 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:34:54 crc kubenswrapper[4786]: I0127 13:34:54.293276 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="2d84fda9-8996-4d17-84f4-df4479ed4ba2" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://b55fc94c0c9afff31148eab1daecd76d73551bb2b6a7182a14ae32148f408f77" gracePeriod=30 Jan 27 13:34:54 crc kubenswrapper[4786]: I0127 13:34:54.305832 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:34:54 crc kubenswrapper[4786]: I0127 13:34:54.306339 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:34:54 crc kubenswrapper[4786]: I0127 13:34:54.309320 4786 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:34:55 crc kubenswrapper[4786]: I0127 13:34:55.091057 4786 generic.go:334] "Generic (PLEG): container finished" podID="e2e297e7-9827-4625-8881-d54fccb87ed5" containerID="cda21eeb34f4ccc965ebeaca0b5d6e7d0d259b5ce1901f393ea44c32654535a2" exitCode=0 Jan 27 13:34:55 crc kubenswrapper[4786]: I0127 13:34:55.091329 4786 generic.go:334] "Generic (PLEG): container finished" podID="e2e297e7-9827-4625-8881-d54fccb87ed5" containerID="e0f3ddfa94641fb3a73e05d26f68718fd56ccff9827c7204dc0bd81a4263ebac" exitCode=143 Jan 27 13:34:55 crc kubenswrapper[4786]: I0127 13:34:55.092103 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"e2e297e7-9827-4625-8881-d54fccb87ed5","Type":"ContainerDied","Data":"cda21eeb34f4ccc965ebeaca0b5d6e7d0d259b5ce1901f393ea44c32654535a2"} Jan 27 13:34:55 crc kubenswrapper[4786]: I0127 13:34:55.092130 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"e2e297e7-9827-4625-8881-d54fccb87ed5","Type":"ContainerDied","Data":"e0f3ddfa94641fb3a73e05d26f68718fd56ccff9827c7204dc0bd81a4263ebac"} Jan 27 13:34:55 crc kubenswrapper[4786]: I0127 13:34:55.142527 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:34:55 crc kubenswrapper[4786]: I0127 13:34:55.268905 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2e297e7-9827-4625-8881-d54fccb87ed5-logs\") pod \"e2e297e7-9827-4625-8881-d54fccb87ed5\" (UID: \"e2e297e7-9827-4625-8881-d54fccb87ed5\") " Jan 27 13:34:55 crc kubenswrapper[4786]: I0127 13:34:55.269020 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kflw8\" (UniqueName: \"kubernetes.io/projected/e2e297e7-9827-4625-8881-d54fccb87ed5-kube-api-access-kflw8\") pod \"e2e297e7-9827-4625-8881-d54fccb87ed5\" (UID: \"e2e297e7-9827-4625-8881-d54fccb87ed5\") " Jan 27 13:34:55 crc kubenswrapper[4786]: I0127 13:34:55.269062 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2e297e7-9827-4625-8881-d54fccb87ed5-config-data\") pod \"e2e297e7-9827-4625-8881-d54fccb87ed5\" (UID: \"e2e297e7-9827-4625-8881-d54fccb87ed5\") " Jan 27 13:34:55 crc kubenswrapper[4786]: I0127 13:34:55.269758 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2e297e7-9827-4625-8881-d54fccb87ed5-logs" (OuterVolumeSpecName: "logs") pod "e2e297e7-9827-4625-8881-d54fccb87ed5" (UID: "e2e297e7-9827-4625-8881-d54fccb87ed5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:34:55 crc kubenswrapper[4786]: I0127 13:34:55.277758 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2e297e7-9827-4625-8881-d54fccb87ed5-kube-api-access-kflw8" (OuterVolumeSpecName: "kube-api-access-kflw8") pod "e2e297e7-9827-4625-8881-d54fccb87ed5" (UID: "e2e297e7-9827-4625-8881-d54fccb87ed5"). InnerVolumeSpecName "kube-api-access-kflw8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:34:55 crc kubenswrapper[4786]: I0127 13:34:55.308471 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2e297e7-9827-4625-8881-d54fccb87ed5-config-data" (OuterVolumeSpecName: "config-data") pod "e2e297e7-9827-4625-8881-d54fccb87ed5" (UID: "e2e297e7-9827-4625-8881-d54fccb87ed5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:34:55 crc kubenswrapper[4786]: I0127 13:34:55.371127 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2e297e7-9827-4625-8881-d54fccb87ed5-logs\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:55 crc kubenswrapper[4786]: I0127 13:34:55.371175 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kflw8\" (UniqueName: \"kubernetes.io/projected/e2e297e7-9827-4625-8881-d54fccb87ed5-kube-api-access-kflw8\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:55 crc kubenswrapper[4786]: I0127 13:34:55.371189 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2e297e7-9827-4625-8881-d54fccb87ed5-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:55 crc kubenswrapper[4786]: I0127 13:34:55.388928 4786 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.162:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 13:34:55 crc kubenswrapper[4786]: I0127 13:34:55.389053 4786 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.162:8775/\": context 
deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 13:34:56 crc kubenswrapper[4786]: I0127 13:34:56.100268 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1" containerName="nova-kuttl-metadata-log" containerID="cri-o://75a5cd3bdf78b620c4be2a5f6bc59b54e25a25cd522e3be3fa7db022268ef66d" gracePeriod=30 Jan 27 13:34:56 crc kubenswrapper[4786]: I0127 13:34:56.100588 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:34:56 crc kubenswrapper[4786]: I0127 13:34:56.101228 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"e2e297e7-9827-4625-8881-d54fccb87ed5","Type":"ContainerDied","Data":"931e94b3be4c7ee203ad4e9d47860eb11432fadbee83f3e0f77c57d2f7bd6ac5"} Jan 27 13:34:56 crc kubenswrapper[4786]: I0127 13:34:56.101261 4786 scope.go:117] "RemoveContainer" containerID="cda21eeb34f4ccc965ebeaca0b5d6e7d0d259b5ce1901f393ea44c32654535a2" Jan 27 13:34:56 crc kubenswrapper[4786]: I0127 13:34:56.101539 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://e63bce6daf2cf42b8c3e22a52e75ae6ea62e00717c66ade206c71317a201a851" gracePeriod=30 Jan 27 13:34:56 crc kubenswrapper[4786]: I0127 13:34:56.182191 4786 scope.go:117] "RemoveContainer" containerID="e0f3ddfa94641fb3a73e05d26f68718fd56ccff9827c7204dc0bd81a4263ebac" Jan 27 13:34:56 crc kubenswrapper[4786]: I0127 13:34:56.203531 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:34:56 crc kubenswrapper[4786]: I0127 13:34:56.207437 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 
27 13:34:56 crc kubenswrapper[4786]: I0127 13:34:56.224766 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:34:56 crc kubenswrapper[4786]: E0127 13:34:56.225221 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2e297e7-9827-4625-8881-d54fccb87ed5" containerName="nova-kuttl-api-api" Jan 27 13:34:56 crc kubenswrapper[4786]: I0127 13:34:56.225240 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2e297e7-9827-4625-8881-d54fccb87ed5" containerName="nova-kuttl-api-api" Jan 27 13:34:56 crc kubenswrapper[4786]: E0127 13:34:56.225264 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2e297e7-9827-4625-8881-d54fccb87ed5" containerName="nova-kuttl-api-log" Jan 27 13:34:56 crc kubenswrapper[4786]: I0127 13:34:56.225271 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2e297e7-9827-4625-8881-d54fccb87ed5" containerName="nova-kuttl-api-log" Jan 27 13:34:56 crc kubenswrapper[4786]: E0127 13:34:56.225287 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da9c38f7-ea97-4faa-922f-a3087cee1b21" containerName="nova-manage" Jan 27 13:34:56 crc kubenswrapper[4786]: I0127 13:34:56.225294 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="da9c38f7-ea97-4faa-922f-a3087cee1b21" containerName="nova-manage" Jan 27 13:34:56 crc kubenswrapper[4786]: I0127 13:34:56.225481 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="da9c38f7-ea97-4faa-922f-a3087cee1b21" containerName="nova-manage" Jan 27 13:34:56 crc kubenswrapper[4786]: I0127 13:34:56.225528 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2e297e7-9827-4625-8881-d54fccb87ed5" containerName="nova-kuttl-api-api" Jan 27 13:34:56 crc kubenswrapper[4786]: I0127 13:34:56.225540 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2e297e7-9827-4625-8881-d54fccb87ed5" containerName="nova-kuttl-api-log" Jan 27 13:34:56 crc 
kubenswrapper[4786]: I0127 13:34:56.226495 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:34:56 crc kubenswrapper[4786]: I0127 13:34:56.228916 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 27 13:34:56 crc kubenswrapper[4786]: I0127 13:34:56.246194 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:34:56 crc kubenswrapper[4786]: I0127 13:34:56.390528 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9b97\" (UniqueName: \"kubernetes.io/projected/51c7d69a-7abb-4abe-8954-874bcd3c8f33-kube-api-access-z9b97\") pod \"nova-kuttl-api-0\" (UID: \"51c7d69a-7abb-4abe-8954-874bcd3c8f33\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:34:56 crc kubenswrapper[4786]: I0127 13:34:56.390595 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51c7d69a-7abb-4abe-8954-874bcd3c8f33-logs\") pod \"nova-kuttl-api-0\" (UID: \"51c7d69a-7abb-4abe-8954-874bcd3c8f33\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:34:56 crc kubenswrapper[4786]: I0127 13:34:56.390666 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51c7d69a-7abb-4abe-8954-874bcd3c8f33-config-data\") pod \"nova-kuttl-api-0\" (UID: \"51c7d69a-7abb-4abe-8954-874bcd3c8f33\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:34:56 crc kubenswrapper[4786]: I0127 13:34:56.465453 4786 scope.go:117] "RemoveContainer" containerID="2f94fef961bba31453718ee505c65aba4d5e2d46c581dccdf2daaebbe3a39906" Jan 27 13:34:56 crc kubenswrapper[4786]: E0127 13:34:56.465863 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:34:56 crc kubenswrapper[4786]: I0127 13:34:56.492112 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9b97\" (UniqueName: \"kubernetes.io/projected/51c7d69a-7abb-4abe-8954-874bcd3c8f33-kube-api-access-z9b97\") pod \"nova-kuttl-api-0\" (UID: \"51c7d69a-7abb-4abe-8954-874bcd3c8f33\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:34:56 crc kubenswrapper[4786]: I0127 13:34:56.492275 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51c7d69a-7abb-4abe-8954-874bcd3c8f33-logs\") pod \"nova-kuttl-api-0\" (UID: \"51c7d69a-7abb-4abe-8954-874bcd3c8f33\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:34:56 crc kubenswrapper[4786]: I0127 13:34:56.492361 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51c7d69a-7abb-4abe-8954-874bcd3c8f33-config-data\") pod \"nova-kuttl-api-0\" (UID: \"51c7d69a-7abb-4abe-8954-874bcd3c8f33\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:34:56 crc kubenswrapper[4786]: I0127 13:34:56.493534 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51c7d69a-7abb-4abe-8954-874bcd3c8f33-logs\") pod \"nova-kuttl-api-0\" (UID: \"51c7d69a-7abb-4abe-8954-874bcd3c8f33\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:34:56 crc kubenswrapper[4786]: I0127 13:34:56.498540 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/51c7d69a-7abb-4abe-8954-874bcd3c8f33-config-data\") pod \"nova-kuttl-api-0\" (UID: \"51c7d69a-7abb-4abe-8954-874bcd3c8f33\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:34:56 crc kubenswrapper[4786]: I0127 13:34:56.509921 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9b97\" (UniqueName: \"kubernetes.io/projected/51c7d69a-7abb-4abe-8954-874bcd3c8f33-kube-api-access-z9b97\") pod \"nova-kuttl-api-0\" (UID: \"51c7d69a-7abb-4abe-8954-874bcd3c8f33\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:34:56 crc kubenswrapper[4786]: I0127 13:34:56.550689 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:34:56 crc kubenswrapper[4786]: I0127 13:34:56.601519 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:34:56 crc kubenswrapper[4786]: I0127 13:34:56.695820 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6jgr\" (UniqueName: \"kubernetes.io/projected/2d84fda9-8996-4d17-84f4-df4479ed4ba2-kube-api-access-r6jgr\") pod \"2d84fda9-8996-4d17-84f4-df4479ed4ba2\" (UID: \"2d84fda9-8996-4d17-84f4-df4479ed4ba2\") " Jan 27 13:34:56 crc kubenswrapper[4786]: I0127 13:34:56.696088 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d84fda9-8996-4d17-84f4-df4479ed4ba2-config-data\") pod \"2d84fda9-8996-4d17-84f4-df4479ed4ba2\" (UID: \"2d84fda9-8996-4d17-84f4-df4479ed4ba2\") " Jan 27 13:34:56 crc kubenswrapper[4786]: I0127 13:34:56.708473 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d84fda9-8996-4d17-84f4-df4479ed4ba2-kube-api-access-r6jgr" (OuterVolumeSpecName: "kube-api-access-r6jgr") pod "2d84fda9-8996-4d17-84f4-df4479ed4ba2" (UID: 
"2d84fda9-8996-4d17-84f4-df4479ed4ba2"). InnerVolumeSpecName "kube-api-access-r6jgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:34:56 crc kubenswrapper[4786]: I0127 13:34:56.724120 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d84fda9-8996-4d17-84f4-df4479ed4ba2-config-data" (OuterVolumeSpecName: "config-data") pod "2d84fda9-8996-4d17-84f4-df4479ed4ba2" (UID: "2d84fda9-8996-4d17-84f4-df4479ed4ba2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:34:56 crc kubenswrapper[4786]: I0127 13:34:56.798501 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d84fda9-8996-4d17-84f4-df4479ed4ba2-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:56 crc kubenswrapper[4786]: I0127 13:34:56.798564 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6jgr\" (UniqueName: \"kubernetes.io/projected/2d84fda9-8996-4d17-84f4-df4479ed4ba2-kube-api-access-r6jgr\") on node \"crc\" DevicePath \"\"" Jan 27 13:34:57 crc kubenswrapper[4786]: I0127 13:34:57.029366 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:34:57 crc kubenswrapper[4786]: I0127 13:34:57.109602 4786 generic.go:334] "Generic (PLEG): container finished" podID="2d84fda9-8996-4d17-84f4-df4479ed4ba2" containerID="b55fc94c0c9afff31148eab1daecd76d73551bb2b6a7182a14ae32148f408f77" exitCode=0 Jan 27 13:34:57 crc kubenswrapper[4786]: I0127 13:34:57.109666 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"2d84fda9-8996-4d17-84f4-df4479ed4ba2","Type":"ContainerDied","Data":"b55fc94c0c9afff31148eab1daecd76d73551bb2b6a7182a14ae32148f408f77"} Jan 27 13:34:57 crc kubenswrapper[4786]: I0127 13:34:57.109679 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:34:57 crc kubenswrapper[4786]: I0127 13:34:57.109700 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"2d84fda9-8996-4d17-84f4-df4479ed4ba2","Type":"ContainerDied","Data":"1bc7500955a7c7c6d77d47aad04498e483de3ca0f12af3e69e482b4974ad7c99"} Jan 27 13:34:57 crc kubenswrapper[4786]: I0127 13:34:57.109712 4786 scope.go:117] "RemoveContainer" containerID="b55fc94c0c9afff31148eab1daecd76d73551bb2b6a7182a14ae32148f408f77" Jan 27 13:34:57 crc kubenswrapper[4786]: I0127 13:34:57.113812 4786 generic.go:334] "Generic (PLEG): container finished" podID="2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1" containerID="75a5cd3bdf78b620c4be2a5f6bc59b54e25a25cd522e3be3fa7db022268ef66d" exitCode=143 Jan 27 13:34:57 crc kubenswrapper[4786]: I0127 13:34:57.113922 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1","Type":"ContainerDied","Data":"75a5cd3bdf78b620c4be2a5f6bc59b54e25a25cd522e3be3fa7db022268ef66d"} Jan 27 13:34:57 crc kubenswrapper[4786]: I0127 13:34:57.115481 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"51c7d69a-7abb-4abe-8954-874bcd3c8f33","Type":"ContainerStarted","Data":"1166250b16a40ab3ef7919b3daf76fae960464713a73ed89198d2f232f22bf68"} Jan 27 13:34:57 crc kubenswrapper[4786]: I0127 13:34:57.132258 4786 scope.go:117] "RemoveContainer" containerID="b55fc94c0c9afff31148eab1daecd76d73551bb2b6a7182a14ae32148f408f77" Jan 27 13:34:57 crc kubenswrapper[4786]: E0127 13:34:57.132823 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b55fc94c0c9afff31148eab1daecd76d73551bb2b6a7182a14ae32148f408f77\": container with ID starting with b55fc94c0c9afff31148eab1daecd76d73551bb2b6a7182a14ae32148f408f77 not found: ID 
does not exist" containerID="b55fc94c0c9afff31148eab1daecd76d73551bb2b6a7182a14ae32148f408f77" Jan 27 13:34:57 crc kubenswrapper[4786]: I0127 13:34:57.132864 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b55fc94c0c9afff31148eab1daecd76d73551bb2b6a7182a14ae32148f408f77"} err="failed to get container status \"b55fc94c0c9afff31148eab1daecd76d73551bb2b6a7182a14ae32148f408f77\": rpc error: code = NotFound desc = could not find container \"b55fc94c0c9afff31148eab1daecd76d73551bb2b6a7182a14ae32148f408f77\": container with ID starting with b55fc94c0c9afff31148eab1daecd76d73551bb2b6a7182a14ae32148f408f77 not found: ID does not exist" Jan 27 13:34:57 crc kubenswrapper[4786]: I0127 13:34:57.157875 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:34:57 crc kubenswrapper[4786]: I0127 13:34:57.169339 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:34:57 crc kubenswrapper[4786]: I0127 13:34:57.189426 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:34:57 crc kubenswrapper[4786]: E0127 13:34:57.189827 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d84fda9-8996-4d17-84f4-df4479ed4ba2" containerName="nova-kuttl-scheduler-scheduler" Jan 27 13:34:57 crc kubenswrapper[4786]: I0127 13:34:57.189846 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d84fda9-8996-4d17-84f4-df4479ed4ba2" containerName="nova-kuttl-scheduler-scheduler" Jan 27 13:34:57 crc kubenswrapper[4786]: I0127 13:34:57.190000 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d84fda9-8996-4d17-84f4-df4479ed4ba2" containerName="nova-kuttl-scheduler-scheduler" Jan 27 13:34:57 crc kubenswrapper[4786]: I0127 13:34:57.190540 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] 
Jan 27 13:34:57 crc kubenswrapper[4786]: I0127 13:34:57.190638 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:34:57 crc kubenswrapper[4786]: I0127 13:34:57.203001 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 27 13:34:57 crc kubenswrapper[4786]: I0127 13:34:57.306465 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1abe3dd-6930-44ad-80c7-de3757265d02-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"f1abe3dd-6930-44ad-80c7-de3757265d02\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:34:57 crc kubenswrapper[4786]: I0127 13:34:57.306571 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bdkp\" (UniqueName: \"kubernetes.io/projected/f1abe3dd-6930-44ad-80c7-de3757265d02-kube-api-access-8bdkp\") pod \"nova-kuttl-scheduler-0\" (UID: \"f1abe3dd-6930-44ad-80c7-de3757265d02\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:34:57 crc kubenswrapper[4786]: I0127 13:34:57.408258 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1abe3dd-6930-44ad-80c7-de3757265d02-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"f1abe3dd-6930-44ad-80c7-de3757265d02\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:34:57 crc kubenswrapper[4786]: I0127 13:34:57.408794 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bdkp\" (UniqueName: \"kubernetes.io/projected/f1abe3dd-6930-44ad-80c7-de3757265d02-kube-api-access-8bdkp\") pod \"nova-kuttl-scheduler-0\" (UID: \"f1abe3dd-6930-44ad-80c7-de3757265d02\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:34:57 crc 
kubenswrapper[4786]: I0127 13:34:57.410449 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 27 13:34:57 crc kubenswrapper[4786]: I0127 13:34:57.422259 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1abe3dd-6930-44ad-80c7-de3757265d02-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"f1abe3dd-6930-44ad-80c7-de3757265d02\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:34:57 crc kubenswrapper[4786]: I0127 13:34:57.424144 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bdkp\" (UniqueName: \"kubernetes.io/projected/f1abe3dd-6930-44ad-80c7-de3757265d02-kube-api-access-8bdkp\") pod \"nova-kuttl-scheduler-0\" (UID: \"f1abe3dd-6930-44ad-80c7-de3757265d02\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:34:57 crc kubenswrapper[4786]: I0127 13:34:57.474905 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d84fda9-8996-4d17-84f4-df4479ed4ba2" path="/var/lib/kubelet/pods/2d84fda9-8996-4d17-84f4-df4479ed4ba2/volumes" Jan 27 13:34:57 crc kubenswrapper[4786]: I0127 13:34:57.476143 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2e297e7-9827-4625-8881-d54fccb87ed5" path="/var/lib/kubelet/pods/e2e297e7-9827-4625-8881-d54fccb87ed5/volumes" Jan 27 13:34:57 crc kubenswrapper[4786]: I0127 13:34:57.517339 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:34:57 crc kubenswrapper[4786]: W0127 13:34:57.942719 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1abe3dd_6930_44ad_80c7_de3757265d02.slice/crio-c2b818d5abaeb2edb2acea50e7c034e7f7561e09c1791817468b72f9016c9e00 WatchSource:0}: Error finding container c2b818d5abaeb2edb2acea50e7c034e7f7561e09c1791817468b72f9016c9e00: Status 404 returned error can't find the container with id c2b818d5abaeb2edb2acea50e7c034e7f7561e09c1791817468b72f9016c9e00 Jan 27 13:34:57 crc kubenswrapper[4786]: I0127 13:34:57.945158 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:34:58 crc kubenswrapper[4786]: I0127 13:34:58.134202 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"f1abe3dd-6930-44ad-80c7-de3757265d02","Type":"ContainerStarted","Data":"79f500f914e78ea163f0fcaea04e0c1859fe73ef1f570f8ef0fc67b224e87242"} Jan 27 13:34:58 crc kubenswrapper[4786]: I0127 13:34:58.134281 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"f1abe3dd-6930-44ad-80c7-de3757265d02","Type":"ContainerStarted","Data":"c2b818d5abaeb2edb2acea50e7c034e7f7561e09c1791817468b72f9016c9e00"} Jan 27 13:34:58 crc kubenswrapper[4786]: I0127 13:34:58.139016 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"51c7d69a-7abb-4abe-8954-874bcd3c8f33","Type":"ContainerStarted","Data":"7becddc47e821d4547b2bcaf14514e7a9b3cf9c84f09e3a37b5ebe5b5106e95b"} Jan 27 13:34:58 crc kubenswrapper[4786]: I0127 13:34:58.139076 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" 
event={"ID":"51c7d69a-7abb-4abe-8954-874bcd3c8f33","Type":"ContainerStarted","Data":"21fabf85db70b1388eab53d6a7c98be1c30e2e42b8fa9b0027542aba48182fbd"} Jan 27 13:34:58 crc kubenswrapper[4786]: I0127 13:34:58.159197 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=1.159181711 podStartE2EDuration="1.159181711s" podCreationTimestamp="2026-01-27 13:34:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:34:58.153036584 +0000 UTC m=+1681.363650723" watchObservedRunningTime="2026-01-27 13:34:58.159181711 +0000 UTC m=+1681.369795830" Jan 27 13:34:58 crc kubenswrapper[4786]: I0127 13:34:58.184696 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.184673268 podStartE2EDuration="2.184673268s" podCreationTimestamp="2026-01-27 13:34:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:34:58.176304399 +0000 UTC m=+1681.386918538" watchObservedRunningTime="2026-01-27 13:34:58.184673268 +0000 UTC m=+1681.395287407" Jan 27 13:35:00 crc kubenswrapper[4786]: I0127 13:35:00.154768 4786 generic.go:334] "Generic (PLEG): container finished" podID="2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1" containerID="e63bce6daf2cf42b8c3e22a52e75ae6ea62e00717c66ade206c71317a201a851" exitCode=0 Jan 27 13:35:00 crc kubenswrapper[4786]: I0127 13:35:00.154813 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1","Type":"ContainerDied","Data":"e63bce6daf2cf42b8c3e22a52e75ae6ea62e00717c66ade206c71317a201a851"} Jan 27 13:35:00 crc kubenswrapper[4786]: I0127 13:35:00.388173 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:35:00 crc kubenswrapper[4786]: I0127 13:35:00.557884 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1-logs\") pod \"2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1\" (UID: \"2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1\") " Jan 27 13:35:00 crc kubenswrapper[4786]: I0127 13:35:00.558198 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkv9d\" (UniqueName: \"kubernetes.io/projected/2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1-kube-api-access-vkv9d\") pod \"2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1\" (UID: \"2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1\") " Jan 27 13:35:00 crc kubenswrapper[4786]: I0127 13:35:00.558339 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1-config-data\") pod \"2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1\" (UID: \"2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1\") " Jan 27 13:35:00 crc kubenswrapper[4786]: I0127 13:35:00.558423 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1-logs" (OuterVolumeSpecName: "logs") pod "2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1" (UID: "2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:35:00 crc kubenswrapper[4786]: I0127 13:35:00.558809 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1-logs\") on node \"crc\" DevicePath \"\"" Jan 27 13:35:00 crc kubenswrapper[4786]: I0127 13:35:00.563685 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1-kube-api-access-vkv9d" (OuterVolumeSpecName: "kube-api-access-vkv9d") pod "2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1" (UID: "2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1"). InnerVolumeSpecName "kube-api-access-vkv9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:35:00 crc kubenswrapper[4786]: I0127 13:35:00.586862 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1-config-data" (OuterVolumeSpecName: "config-data") pod "2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1" (UID: "2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:35:00 crc kubenswrapper[4786]: I0127 13:35:00.663596 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkv9d\" (UniqueName: \"kubernetes.io/projected/2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1-kube-api-access-vkv9d\") on node \"crc\" DevicePath \"\"" Jan 27 13:35:00 crc kubenswrapper[4786]: I0127 13:35:00.663766 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:35:01 crc kubenswrapper[4786]: I0127 13:35:01.165963 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1","Type":"ContainerDied","Data":"51104a330730b609dc5fbd534c125cd3e02d083da8d39df1235205d2bdf30b60"} Jan 27 13:35:01 crc kubenswrapper[4786]: I0127 13:35:01.166288 4786 scope.go:117] "RemoveContainer" containerID="e63bce6daf2cf42b8c3e22a52e75ae6ea62e00717c66ade206c71317a201a851" Jan 27 13:35:01 crc kubenswrapper[4786]: I0127 13:35:01.165998 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:35:01 crc kubenswrapper[4786]: I0127 13:35:01.194813 4786 scope.go:117] "RemoveContainer" containerID="75a5cd3bdf78b620c4be2a5f6bc59b54e25a25cd522e3be3fa7db022268ef66d" Jan 27 13:35:01 crc kubenswrapper[4786]: I0127 13:35:01.210066 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:35:01 crc kubenswrapper[4786]: I0127 13:35:01.221125 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:35:01 crc kubenswrapper[4786]: I0127 13:35:01.230971 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:35:01 crc kubenswrapper[4786]: E0127 13:35:01.231510 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1" containerName="nova-kuttl-metadata-log" Jan 27 13:35:01 crc kubenswrapper[4786]: I0127 13:35:01.231591 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1" containerName="nova-kuttl-metadata-log" Jan 27 13:35:01 crc kubenswrapper[4786]: E0127 13:35:01.231696 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1" containerName="nova-kuttl-metadata-metadata" Jan 27 13:35:01 crc kubenswrapper[4786]: I0127 13:35:01.231751 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1" containerName="nova-kuttl-metadata-metadata" Jan 27 13:35:01 crc kubenswrapper[4786]: I0127 13:35:01.232034 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1" containerName="nova-kuttl-metadata-log" Jan 27 13:35:01 crc kubenswrapper[4786]: I0127 13:35:01.232141 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1" 
containerName="nova-kuttl-metadata-metadata" Jan 27 13:35:01 crc kubenswrapper[4786]: I0127 13:35:01.233371 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:35:01 crc kubenswrapper[4786]: I0127 13:35:01.236447 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 27 13:35:01 crc kubenswrapper[4786]: I0127 13:35:01.256975 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:35:01 crc kubenswrapper[4786]: I0127 13:35:01.269504 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmv22\" (UniqueName: \"kubernetes.io/projected/2ea95db7-d568-46c3-8874-98748692c4fe-kube-api-access-wmv22\") pod \"nova-kuttl-metadata-0\" (UID: \"2ea95db7-d568-46c3-8874-98748692c4fe\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:35:01 crc kubenswrapper[4786]: I0127 13:35:01.269573 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea95db7-d568-46c3-8874-98748692c4fe-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"2ea95db7-d568-46c3-8874-98748692c4fe\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:35:01 crc kubenswrapper[4786]: I0127 13:35:01.269745 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea95db7-d568-46c3-8874-98748692c4fe-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"2ea95db7-d568-46c3-8874-98748692c4fe\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:35:01 crc kubenswrapper[4786]: I0127 13:35:01.371036 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmv22\" (UniqueName: 
\"kubernetes.io/projected/2ea95db7-d568-46c3-8874-98748692c4fe-kube-api-access-wmv22\") pod \"nova-kuttl-metadata-0\" (UID: \"2ea95db7-d568-46c3-8874-98748692c4fe\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:35:01 crc kubenswrapper[4786]: I0127 13:35:01.371110 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea95db7-d568-46c3-8874-98748692c4fe-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"2ea95db7-d568-46c3-8874-98748692c4fe\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:35:01 crc kubenswrapper[4786]: I0127 13:35:01.371180 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea95db7-d568-46c3-8874-98748692c4fe-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"2ea95db7-d568-46c3-8874-98748692c4fe\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:35:01 crc kubenswrapper[4786]: I0127 13:35:01.371600 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea95db7-d568-46c3-8874-98748692c4fe-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"2ea95db7-d568-46c3-8874-98748692c4fe\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:35:01 crc kubenswrapper[4786]: I0127 13:35:01.375636 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea95db7-d568-46c3-8874-98748692c4fe-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"2ea95db7-d568-46c3-8874-98748692c4fe\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:35:01 crc kubenswrapper[4786]: I0127 13:35:01.396907 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmv22\" (UniqueName: \"kubernetes.io/projected/2ea95db7-d568-46c3-8874-98748692c4fe-kube-api-access-wmv22\") pod \"nova-kuttl-metadata-0\" (UID: 
\"2ea95db7-d568-46c3-8874-98748692c4fe\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:35:01 crc kubenswrapper[4786]: I0127 13:35:01.474968 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1" path="/var/lib/kubelet/pods/2b634eb7-8f8f-4acf-96ab-b0b0b02c4ba1/volumes" Jan 27 13:35:01 crc kubenswrapper[4786]: I0127 13:35:01.554107 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:35:01 crc kubenswrapper[4786]: I0127 13:35:01.979904 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:35:02 crc kubenswrapper[4786]: I0127 13:35:02.177189 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"2ea95db7-d568-46c3-8874-98748692c4fe","Type":"ContainerStarted","Data":"e7de9818126b48e425ef32264a59b9dd0b179c62792ab1459874ea7fa290f66e"} Jan 27 13:35:02 crc kubenswrapper[4786]: I0127 13:35:02.518015 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:35:03 crc kubenswrapper[4786]: I0127 13:35:03.185232 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"2ea95db7-d568-46c3-8874-98748692c4fe","Type":"ContainerStarted","Data":"104a660849b90da9aa6f1ac9393aefd431f96d0207343c6ddba13d10403447c6"} Jan 27 13:35:04 crc kubenswrapper[4786]: I0127 13:35:04.194856 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"2ea95db7-d568-46c3-8874-98748692c4fe","Type":"ContainerStarted","Data":"ba56aca22a2e60ed77549cca35bf7bd0cea1e10bb1b49792edabcb6881995ba3"} Jan 27 13:35:05 crc kubenswrapper[4786]: I0127 13:35:05.226649 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=4.226630576 podStartE2EDuration="4.226630576s" podCreationTimestamp="2026-01-27 13:35:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:35:05.220000785 +0000 UTC m=+1688.430614904" watchObservedRunningTime="2026-01-27 13:35:05.226630576 +0000 UTC m=+1688.437244695" Jan 27 13:35:06 crc kubenswrapper[4786]: I0127 13:35:06.554864 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:35:06 crc kubenswrapper[4786]: I0127 13:35:06.554975 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:35:06 crc kubenswrapper[4786]: I0127 13:35:06.601992 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:35:06 crc kubenswrapper[4786]: I0127 13:35:06.602113 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:35:07 crc kubenswrapper[4786]: I0127 13:35:07.518422 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:35:07 crc kubenswrapper[4786]: I0127 13:35:07.546898 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:35:07 crc kubenswrapper[4786]: I0127 13:35:07.686843 4786 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="51c7d69a-7abb-4abe-8954-874bcd3c8f33" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.166:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 13:35:07 crc kubenswrapper[4786]: I0127 13:35:07.686866 4786 prober.go:107] "Probe 
failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="51c7d69a-7abb-4abe-8954-874bcd3c8f33" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.166:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 13:35:08 crc kubenswrapper[4786]: I0127 13:35:08.274676 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:35:10 crc kubenswrapper[4786]: I0127 13:35:10.464903 4786 scope.go:117] "RemoveContainer" containerID="2f94fef961bba31453718ee505c65aba4d5e2d46c581dccdf2daaebbe3a39906" Jan 27 13:35:10 crc kubenswrapper[4786]: E0127 13:35:10.465447 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:35:11 crc kubenswrapper[4786]: I0127 13:35:11.554762 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:35:11 crc kubenswrapper[4786]: I0127 13:35:11.554873 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:35:12 crc kubenswrapper[4786]: I0127 13:35:12.639852 4786 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="2ea95db7-d568-46c3-8874-98748692c4fe" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.168:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 13:35:12 crc kubenswrapper[4786]: I0127 13:35:12.640042 4786 prober.go:107] 
"Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="2ea95db7-d568-46c3-8874-98748692c4fe" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.168:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 13:35:16 crc kubenswrapper[4786]: I0127 13:35:16.605265 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:35:16 crc kubenswrapper[4786]: I0127 13:35:16.607032 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:35:16 crc kubenswrapper[4786]: I0127 13:35:16.609130 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:35:16 crc kubenswrapper[4786]: I0127 13:35:16.612977 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:35:17 crc kubenswrapper[4786]: I0127 13:35:17.311163 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:35:17 crc kubenswrapper[4786]: I0127 13:35:17.314781 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:35:21 crc kubenswrapper[4786]: I0127 13:35:21.557375 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:35:21 crc kubenswrapper[4786]: I0127 13:35:21.558760 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:35:21 crc kubenswrapper[4786]: I0127 13:35:21.560780 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:35:21 crc kubenswrapper[4786]: I0127 13:35:21.561173 4786 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:35:22 crc kubenswrapper[4786]: I0127 13:35:22.465228 4786 scope.go:117] "RemoveContainer" containerID="2f94fef961bba31453718ee505c65aba4d5e2d46c581dccdf2daaebbe3a39906" Jan 27 13:35:22 crc kubenswrapper[4786]: E0127 13:35:22.465429 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:35:22 crc kubenswrapper[4786]: I0127 13:35:22.612589 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-2"] Jan 27 13:35:22 crc kubenswrapper[4786]: I0127 13:35:22.613968 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 27 13:35:22 crc kubenswrapper[4786]: I0127 13:35:22.649702 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-1"] Jan 27 13:35:22 crc kubenswrapper[4786]: I0127 13:35:22.651438 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 27 13:35:22 crc kubenswrapper[4786]: I0127 13:35:22.655971 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-2"] Jan 27 13:35:22 crc kubenswrapper[4786]: I0127 13:35:22.679878 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-1"] Jan 27 13:35:22 crc kubenswrapper[4786]: I0127 13:35:22.685675 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01603e06-2096-4686-934f-59aade63c30d-config-data\") pod \"nova-kuttl-api-2\" (UID: \"01603e06-2096-4686-934f-59aade63c30d\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 27 13:35:22 crc kubenswrapper[4786]: I0127 13:35:22.685746 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01603e06-2096-4686-934f-59aade63c30d-logs\") pod \"nova-kuttl-api-2\" (UID: \"01603e06-2096-4686-934f-59aade63c30d\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 27 13:35:22 crc kubenswrapper[4786]: I0127 13:35:22.685832 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz4xs\" (UniqueName: \"kubernetes.io/projected/01603e06-2096-4686-934f-59aade63c30d-kube-api-access-vz4xs\") pod \"nova-kuttl-api-2\" (UID: \"01603e06-2096-4686-934f-59aade63c30d\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 27 13:35:22 crc kubenswrapper[4786]: I0127 13:35:22.787543 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51df781b-8436-4545-8e9a-c9770e35814b-logs\") pod \"nova-kuttl-api-1\" (UID: \"51df781b-8436-4545-8e9a-c9770e35814b\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 27 13:35:22 crc kubenswrapper[4786]: I0127 13:35:22.787622 4786 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51df781b-8436-4545-8e9a-c9770e35814b-config-data\") pod \"nova-kuttl-api-1\" (UID: \"51df781b-8436-4545-8e9a-c9770e35814b\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 27 13:35:22 crc kubenswrapper[4786]: I0127 13:35:22.787646 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcfhb\" (UniqueName: \"kubernetes.io/projected/51df781b-8436-4545-8e9a-c9770e35814b-kube-api-access-dcfhb\") pod \"nova-kuttl-api-1\" (UID: \"51df781b-8436-4545-8e9a-c9770e35814b\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 27 13:35:22 crc kubenswrapper[4786]: I0127 13:35:22.787684 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01603e06-2096-4686-934f-59aade63c30d-config-data\") pod \"nova-kuttl-api-2\" (UID: \"01603e06-2096-4686-934f-59aade63c30d\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 27 13:35:22 crc kubenswrapper[4786]: I0127 13:35:22.787726 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01603e06-2096-4686-934f-59aade63c30d-logs\") pod \"nova-kuttl-api-2\" (UID: \"01603e06-2096-4686-934f-59aade63c30d\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 27 13:35:22 crc kubenswrapper[4786]: I0127 13:35:22.787774 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz4xs\" (UniqueName: \"kubernetes.io/projected/01603e06-2096-4686-934f-59aade63c30d-kube-api-access-vz4xs\") pod \"nova-kuttl-api-2\" (UID: \"01603e06-2096-4686-934f-59aade63c30d\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 27 13:35:22 crc kubenswrapper[4786]: I0127 13:35:22.788331 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/01603e06-2096-4686-934f-59aade63c30d-logs\") pod \"nova-kuttl-api-2\" (UID: \"01603e06-2096-4686-934f-59aade63c30d\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 27 13:35:22 crc kubenswrapper[4786]: I0127 13:35:22.793408 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01603e06-2096-4686-934f-59aade63c30d-config-data\") pod \"nova-kuttl-api-2\" (UID: \"01603e06-2096-4686-934f-59aade63c30d\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 27 13:35:22 crc kubenswrapper[4786]: I0127 13:35:22.803642 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz4xs\" (UniqueName: \"kubernetes.io/projected/01603e06-2096-4686-934f-59aade63c30d-kube-api-access-vz4xs\") pod \"nova-kuttl-api-2\" (UID: \"01603e06-2096-4686-934f-59aade63c30d\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 27 13:35:22 crc kubenswrapper[4786]: I0127 13:35:22.889353 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51df781b-8436-4545-8e9a-c9770e35814b-logs\") pod \"nova-kuttl-api-1\" (UID: \"51df781b-8436-4545-8e9a-c9770e35814b\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 27 13:35:22 crc kubenswrapper[4786]: I0127 13:35:22.889414 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51df781b-8436-4545-8e9a-c9770e35814b-config-data\") pod \"nova-kuttl-api-1\" (UID: \"51df781b-8436-4545-8e9a-c9770e35814b\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 27 13:35:22 crc kubenswrapper[4786]: I0127 13:35:22.889440 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcfhb\" (UniqueName: \"kubernetes.io/projected/51df781b-8436-4545-8e9a-c9770e35814b-kube-api-access-dcfhb\") pod \"nova-kuttl-api-1\" (UID: \"51df781b-8436-4545-8e9a-c9770e35814b\") " 
pod="nova-kuttl-default/nova-kuttl-api-1" Jan 27 13:35:22 crc kubenswrapper[4786]: I0127 13:35:22.890546 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51df781b-8436-4545-8e9a-c9770e35814b-logs\") pod \"nova-kuttl-api-1\" (UID: \"51df781b-8436-4545-8e9a-c9770e35814b\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 27 13:35:22 crc kubenswrapper[4786]: I0127 13:35:22.893251 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51df781b-8436-4545-8e9a-c9770e35814b-config-data\") pod \"nova-kuttl-api-1\" (UID: \"51df781b-8436-4545-8e9a-c9770e35814b\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 27 13:35:22 crc kubenswrapper[4786]: I0127 13:35:22.915147 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcfhb\" (UniqueName: \"kubernetes.io/projected/51df781b-8436-4545-8e9a-c9770e35814b-kube-api-access-dcfhb\") pod \"nova-kuttl-api-1\" (UID: \"51df781b-8436-4545-8e9a-c9770e35814b\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 27 13:35:22 crc kubenswrapper[4786]: I0127 13:35:22.933446 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-2"] Jan 27 13:35:22 crc kubenswrapper[4786]: I0127 13:35:22.934684 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 27 13:35:22 crc kubenswrapper[4786]: I0127 13:35:22.934841 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 27 13:35:22 crc kubenswrapper[4786]: I0127 13:35:22.942168 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-1"] Jan 27 13:35:22 crc kubenswrapper[4786]: I0127 13:35:22.943198 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 27 13:35:22 crc kubenswrapper[4786]: I0127 13:35:22.953339 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-2"] Jan 27 13:35:22 crc kubenswrapper[4786]: I0127 13:35:22.967269 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-1"] Jan 27 13:35:22 crc kubenswrapper[4786]: I0127 13:35:22.974950 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 27 13:35:22 crc kubenswrapper[4786]: I0127 13:35:22.991284 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kj5c\" (UniqueName: \"kubernetes.io/projected/1247ecc9-9177-446b-b939-3d158d4d0cd0-kube-api-access-4kj5c\") pod \"nova-kuttl-cell0-conductor-2\" (UID: \"1247ecc9-9177-446b-b939-3d158d4d0cd0\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 27 13:35:22 crc kubenswrapper[4786]: I0127 13:35:22.991366 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1247ecc9-9177-446b-b939-3d158d4d0cd0-config-data\") pod \"nova-kuttl-cell0-conductor-2\" (UID: \"1247ecc9-9177-446b-b939-3d158d4d0cd0\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 27 13:35:23 crc kubenswrapper[4786]: I0127 13:35:23.092688 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kj5c\" (UniqueName: \"kubernetes.io/projected/1247ecc9-9177-446b-b939-3d158d4d0cd0-kube-api-access-4kj5c\") pod \"nova-kuttl-cell0-conductor-2\" (UID: \"1247ecc9-9177-446b-b939-3d158d4d0cd0\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 27 13:35:23 crc kubenswrapper[4786]: I0127 13:35:23.093087 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1247ecc9-9177-446b-b939-3d158d4d0cd0-config-data\") pod \"nova-kuttl-cell0-conductor-2\" (UID: \"1247ecc9-9177-446b-b939-3d158d4d0cd0\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 27 13:35:23 crc kubenswrapper[4786]: I0127 13:35:23.093148 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62afa32f-2cf3-4581-8db3-0a4dd0d70545-config-data\") pod \"nova-kuttl-cell0-conductor-1\" (UID: \"62afa32f-2cf3-4581-8db3-0a4dd0d70545\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 27 13:35:23 crc kubenswrapper[4786]: I0127 13:35:23.093460 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbxkm\" (UniqueName: \"kubernetes.io/projected/62afa32f-2cf3-4581-8db3-0a4dd0d70545-kube-api-access-nbxkm\") pod \"nova-kuttl-cell0-conductor-1\" (UID: \"62afa32f-2cf3-4581-8db3-0a4dd0d70545\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 27 13:35:23 crc kubenswrapper[4786]: I0127 13:35:23.104414 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1247ecc9-9177-446b-b939-3d158d4d0cd0-config-data\") pod \"nova-kuttl-cell0-conductor-2\" (UID: \"1247ecc9-9177-446b-b939-3d158d4d0cd0\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 27 13:35:23 crc kubenswrapper[4786]: I0127 13:35:23.111849 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kj5c\" (UniqueName: \"kubernetes.io/projected/1247ecc9-9177-446b-b939-3d158d4d0cd0-kube-api-access-4kj5c\") pod \"nova-kuttl-cell0-conductor-2\" (UID: \"1247ecc9-9177-446b-b939-3d158d4d0cd0\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 27 13:35:23 crc kubenswrapper[4786]: I0127 13:35:23.194570 4786 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nbxkm\" (UniqueName: \"kubernetes.io/projected/62afa32f-2cf3-4581-8db3-0a4dd0d70545-kube-api-access-nbxkm\") pod \"nova-kuttl-cell0-conductor-1\" (UID: \"62afa32f-2cf3-4581-8db3-0a4dd0d70545\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 27 13:35:23 crc kubenswrapper[4786]: I0127 13:35:23.194724 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62afa32f-2cf3-4581-8db3-0a4dd0d70545-config-data\") pod \"nova-kuttl-cell0-conductor-1\" (UID: \"62afa32f-2cf3-4581-8db3-0a4dd0d70545\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 27 13:35:23 crc kubenswrapper[4786]: I0127 13:35:23.198657 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62afa32f-2cf3-4581-8db3-0a4dd0d70545-config-data\") pod \"nova-kuttl-cell0-conductor-1\" (UID: \"62afa32f-2cf3-4581-8db3-0a4dd0d70545\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 27 13:35:23 crc kubenswrapper[4786]: I0127 13:35:23.219522 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbxkm\" (UniqueName: \"kubernetes.io/projected/62afa32f-2cf3-4581-8db3-0a4dd0d70545-kube-api-access-nbxkm\") pod \"nova-kuttl-cell0-conductor-1\" (UID: \"62afa32f-2cf3-4581-8db3-0a4dd0d70545\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 27 13:35:23 crc kubenswrapper[4786]: I0127 13:35:23.306576 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 27 13:35:23 crc kubenswrapper[4786]: I0127 13:35:23.316121 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 27 13:35:23 crc kubenswrapper[4786]: I0127 13:35:23.390946 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-2"] Jan 27 13:35:23 crc kubenswrapper[4786]: W0127 13:35:23.392851 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01603e06_2096_4686_934f_59aade63c30d.slice/crio-eaac374789d6413aee73aba7f6f311ad4229b14540765940feae66563a619b69 WatchSource:0}: Error finding container eaac374789d6413aee73aba7f6f311ad4229b14540765940feae66563a619b69: Status 404 returned error can't find the container with id eaac374789d6413aee73aba7f6f311ad4229b14540765940feae66563a619b69 Jan 27 13:35:23 crc kubenswrapper[4786]: I0127 13:35:23.509095 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-1"] Jan 27 13:35:23 crc kubenswrapper[4786]: W0127 13:35:23.513535 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51df781b_8436_4545_8e9a_c9770e35814b.slice/crio-be6ffe3cb50a6e089a50dc11ce2c4ead7ed3c314d7984feb8fa95ec729eae1a5 WatchSource:0}: Error finding container be6ffe3cb50a6e089a50dc11ce2c4ead7ed3c314d7984feb8fa95ec729eae1a5: Status 404 returned error can't find the container with id be6ffe3cb50a6e089a50dc11ce2c4ead7ed3c314d7984feb8fa95ec729eae1a5 Jan 27 13:35:23 crc kubenswrapper[4786]: W0127 13:35:23.779824 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1247ecc9_9177_446b_b939_3d158d4d0cd0.slice/crio-18f3c56c8cfea4079d580fe52c9c5d44de2dbfe10ac3ae31e7c6c5c8ec2a72da WatchSource:0}: Error finding container 18f3c56c8cfea4079d580fe52c9c5d44de2dbfe10ac3ae31e7c6c5c8ec2a72da: Status 404 returned error can't find the container with id 
18f3c56c8cfea4079d580fe52c9c5d44de2dbfe10ac3ae31e7c6c5c8ec2a72da Jan 27 13:35:23 crc kubenswrapper[4786]: I0127 13:35:23.782497 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-2"] Jan 27 13:35:23 crc kubenswrapper[4786]: I0127 13:35:23.849810 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-1"] Jan 27 13:35:24 crc kubenswrapper[4786]: I0127 13:35:24.375674 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" event={"ID":"1247ecc9-9177-446b-b939-3d158d4d0cd0","Type":"ContainerStarted","Data":"fd7331605674ef0bcca5c26e92d50aa1d9e1fc56b71de769e78852c825b29778"} Jan 27 13:35:24 crc kubenswrapper[4786]: I0127 13:35:24.375713 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" event={"ID":"1247ecc9-9177-446b-b939-3d158d4d0cd0","Type":"ContainerStarted","Data":"18f3c56c8cfea4079d580fe52c9c5d44de2dbfe10ac3ae31e7c6c5c8ec2a72da"} Jan 27 13:35:24 crc kubenswrapper[4786]: I0127 13:35:24.375828 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 27 13:35:24 crc kubenswrapper[4786]: I0127 13:35:24.377366 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-1" event={"ID":"51df781b-8436-4545-8e9a-c9770e35814b","Type":"ContainerStarted","Data":"d10ebdb967fbe75157a1f18af14fcbcc36ce9291fb1faef67b658443a7bb79ec"} Jan 27 13:35:24 crc kubenswrapper[4786]: I0127 13:35:24.377399 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-1" event={"ID":"51df781b-8436-4545-8e9a-c9770e35814b","Type":"ContainerStarted","Data":"9cf5fbcd089a97b29f6eee2d896152e659b2c68a5cbcb84845abfbc5162bf015"} Jan 27 13:35:24 crc kubenswrapper[4786]: I0127 13:35:24.377414 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="nova-kuttl-default/nova-kuttl-api-1" event={"ID":"51df781b-8436-4545-8e9a-c9770e35814b","Type":"ContainerStarted","Data":"be6ffe3cb50a6e089a50dc11ce2c4ead7ed3c314d7984feb8fa95ec729eae1a5"} Jan 27 13:35:24 crc kubenswrapper[4786]: I0127 13:35:24.379148 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-2" event={"ID":"01603e06-2096-4686-934f-59aade63c30d","Type":"ContainerStarted","Data":"d52c4a275ff5a9925f6338aedc505ece806b57c1cd8be1e7c49af96bbb995ee5"} Jan 27 13:35:24 crc kubenswrapper[4786]: I0127 13:35:24.379173 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-2" event={"ID":"01603e06-2096-4686-934f-59aade63c30d","Type":"ContainerStarted","Data":"20f42345c808e08cc17e9e6d741b5c56c9d151426e98ad97cb1aa623cd2995f1"} Jan 27 13:35:24 crc kubenswrapper[4786]: I0127 13:35:24.379197 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-2" event={"ID":"01603e06-2096-4686-934f-59aade63c30d","Type":"ContainerStarted","Data":"eaac374789d6413aee73aba7f6f311ad4229b14540765940feae66563a619b69"} Jan 27 13:35:24 crc kubenswrapper[4786]: I0127 13:35:24.380394 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" event={"ID":"62afa32f-2cf3-4581-8db3-0a4dd0d70545","Type":"ContainerStarted","Data":"9bcb9c71905a133eee1dff949e34ca22440258f1eab51d520f8aeceff6f562dd"} Jan 27 13:35:24 crc kubenswrapper[4786]: I0127 13:35:24.380425 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" event={"ID":"62afa32f-2cf3-4581-8db3-0a4dd0d70545","Type":"ContainerStarted","Data":"35462ad291ad7c3fd6c94a3ae419240e1a385580703840571630beca19992bf4"} Jan 27 13:35:24 crc kubenswrapper[4786]: I0127 13:35:24.380537 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 27 13:35:24 crc 
kubenswrapper[4786]: I0127 13:35:24.396953 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" podStartSLOduration=2.396931386 podStartE2EDuration="2.396931386s" podCreationTimestamp="2026-01-27 13:35:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:35:24.387054396 +0000 UTC m=+1707.597668515" watchObservedRunningTime="2026-01-27 13:35:24.396931386 +0000 UTC m=+1707.607545515" Jan 27 13:35:24 crc kubenswrapper[4786]: I0127 13:35:24.406045 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" podStartSLOduration=2.406025085 podStartE2EDuration="2.406025085s" podCreationTimestamp="2026-01-27 13:35:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:35:24.401428479 +0000 UTC m=+1707.612042618" watchObservedRunningTime="2026-01-27 13:35:24.406025085 +0000 UTC m=+1707.616639204" Jan 27 13:35:24 crc kubenswrapper[4786]: I0127 13:35:24.426963 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-1" podStartSLOduration=2.426944707 podStartE2EDuration="2.426944707s" podCreationTimestamp="2026-01-27 13:35:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:35:24.420298135 +0000 UTC m=+1707.630912254" watchObservedRunningTime="2026-01-27 13:35:24.426944707 +0000 UTC m=+1707.637558826" Jan 27 13:35:32 crc kubenswrapper[4786]: I0127 13:35:32.935420 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 27 13:35:32 crc kubenswrapper[4786]: I0127 13:35:32.936212 4786 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 27 13:35:32 crc kubenswrapper[4786]: I0127 13:35:32.975828 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 27 13:35:32 crc kubenswrapper[4786]: I0127 13:35:32.975887 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 27 13:35:33 crc kubenswrapper[4786]: I0127 13:35:33.339432 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 27 13:35:33 crc kubenswrapper[4786]: I0127 13:35:33.351321 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 27 13:35:33 crc kubenswrapper[4786]: I0127 13:35:33.362310 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-2" podStartSLOduration=11.362292141 podStartE2EDuration="11.362292141s" podCreationTimestamp="2026-01-27 13:35:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:35:24.449577525 +0000 UTC m=+1707.660191664" watchObservedRunningTime="2026-01-27 13:35:33.362292141 +0000 UTC m=+1716.572906250" Jan 27 13:35:33 crc kubenswrapper[4786]: I0127 13:35:33.976886 4786 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-2" podUID="01603e06-2096-4686-934f-59aade63c30d" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.169:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.059812 4786 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-2" podUID="01603e06-2096-4686-934f-59aade63c30d" 
containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.169:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.059811 4786 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-1" podUID="51df781b-8436-4545-8e9a-c9770e35814b" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.170:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.059878 4786 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-1" podUID="51df781b-8436-4545-8e9a-c9770e35814b" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.170:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.488105 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-1"] Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.489658 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.499283 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-2"] Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.500899 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.511540 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-1"] Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.520688 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-2"] Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.551229 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-1"] Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.552686 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.558636 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-2"] Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.560048 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.578273 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-1"] Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.593764 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh7sh\" (UniqueName: \"kubernetes.io/projected/c946efaf-f2a6-48b6-a52c-3fd537fc15f4-kube-api-access-wh7sh\") pod \"nova-kuttl-scheduler-1\" (UID: \"c946efaf-f2a6-48b6-a52c-3fd537fc15f4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.593872 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c946efaf-f2a6-48b6-a52c-3fd537fc15f4-config-data\") pod \"nova-kuttl-scheduler-1\" (UID: \"c946efaf-f2a6-48b6-a52c-3fd537fc15f4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.593976 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-544kh\" (UniqueName: \"kubernetes.io/projected/e2f7b9c2-401b-456a-93d3-cb99227f3a21-kube-api-access-544kh\") pod \"nova-kuttl-scheduler-2\" (UID: \"e2f7b9c2-401b-456a-93d3-cb99227f3a21\") " pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.594010 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2f7b9c2-401b-456a-93d3-cb99227f3a21-config-data\") pod \"nova-kuttl-scheduler-2\" (UID: \"e2f7b9c2-401b-456a-93d3-cb99227f3a21\") " pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.610834 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["nova-kuttl-default/nova-kuttl-metadata-2"] Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.696374 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df824f05-4250-4af1-a208-e4d1233d297f-logs\") pod \"nova-kuttl-metadata-2\" (UID: \"df824f05-4250-4af1-a208-e4d1233d297f\") " pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.696717 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/898828a7-a3cd-4455-b719-15d886f937a4-logs\") pod \"nova-kuttl-metadata-1\" (UID: \"898828a7-a3cd-4455-b719-15d886f937a4\") " pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.697050 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c946efaf-f2a6-48b6-a52c-3fd537fc15f4-config-data\") pod \"nova-kuttl-scheduler-1\" (UID: \"c946efaf-f2a6-48b6-a52c-3fd537fc15f4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.697252 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwzmj\" (UniqueName: \"kubernetes.io/projected/898828a7-a3cd-4455-b719-15d886f937a4-kube-api-access-fwzmj\") pod \"nova-kuttl-metadata-1\" (UID: \"898828a7-a3cd-4455-b719-15d886f937a4\") " pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.697431 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-544kh\" (UniqueName: \"kubernetes.io/projected/e2f7b9c2-401b-456a-93d3-cb99227f3a21-kube-api-access-544kh\") pod \"nova-kuttl-scheduler-2\" (UID: \"e2f7b9c2-401b-456a-93d3-cb99227f3a21\") " 
pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.697523 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtdsj\" (UniqueName: \"kubernetes.io/projected/df824f05-4250-4af1-a208-e4d1233d297f-kube-api-access-jtdsj\") pod \"nova-kuttl-metadata-2\" (UID: \"df824f05-4250-4af1-a208-e4d1233d297f\") " pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.697660 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2f7b9c2-401b-456a-93d3-cb99227f3a21-config-data\") pod \"nova-kuttl-scheduler-2\" (UID: \"e2f7b9c2-401b-456a-93d3-cb99227f3a21\") " pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.697801 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/898828a7-a3cd-4455-b719-15d886f937a4-config-data\") pod \"nova-kuttl-metadata-1\" (UID: \"898828a7-a3cd-4455-b719-15d886f937a4\") " pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.697907 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df824f05-4250-4af1-a208-e4d1233d297f-config-data\") pod \"nova-kuttl-metadata-2\" (UID: \"df824f05-4250-4af1-a208-e4d1233d297f\") " pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.698005 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh7sh\" (UniqueName: \"kubernetes.io/projected/c946efaf-f2a6-48b6-a52c-3fd537fc15f4-kube-api-access-wh7sh\") pod \"nova-kuttl-scheduler-1\" (UID: \"c946efaf-f2a6-48b6-a52c-3fd537fc15f4\") " 
pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.707443 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c946efaf-f2a6-48b6-a52c-3fd537fc15f4-config-data\") pod \"nova-kuttl-scheduler-1\" (UID: \"c946efaf-f2a6-48b6-a52c-3fd537fc15f4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.711798 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2f7b9c2-401b-456a-93d3-cb99227f3a21-config-data\") pod \"nova-kuttl-scheduler-2\" (UID: \"e2f7b9c2-401b-456a-93d3-cb99227f3a21\") " pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.722914 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh7sh\" (UniqueName: \"kubernetes.io/projected/c946efaf-f2a6-48b6-a52c-3fd537fc15f4-kube-api-access-wh7sh\") pod \"nova-kuttl-scheduler-1\" (UID: \"c946efaf-f2a6-48b6-a52c-3fd537fc15f4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.725183 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-544kh\" (UniqueName: \"kubernetes.io/projected/e2f7b9c2-401b-456a-93d3-cb99227f3a21-kube-api-access-544kh\") pod \"nova-kuttl-scheduler-2\" (UID: \"e2f7b9c2-401b-456a-93d3-cb99227f3a21\") " pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.799772 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df824f05-4250-4af1-a208-e4d1233d297f-logs\") pod \"nova-kuttl-metadata-2\" (UID: \"df824f05-4250-4af1-a208-e4d1233d297f\") " pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.799827 
4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/898828a7-a3cd-4455-b719-15d886f937a4-logs\") pod \"nova-kuttl-metadata-1\" (UID: \"898828a7-a3cd-4455-b719-15d886f937a4\") " pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.799882 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwzmj\" (UniqueName: \"kubernetes.io/projected/898828a7-a3cd-4455-b719-15d886f937a4-kube-api-access-fwzmj\") pod \"nova-kuttl-metadata-1\" (UID: \"898828a7-a3cd-4455-b719-15d886f937a4\") " pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.799953 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtdsj\" (UniqueName: \"kubernetes.io/projected/df824f05-4250-4af1-a208-e4d1233d297f-kube-api-access-jtdsj\") pod \"nova-kuttl-metadata-2\" (UID: \"df824f05-4250-4af1-a208-e4d1233d297f\") " pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.800016 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/898828a7-a3cd-4455-b719-15d886f937a4-config-data\") pod \"nova-kuttl-metadata-1\" (UID: \"898828a7-a3cd-4455-b719-15d886f937a4\") " pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.800045 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df824f05-4250-4af1-a208-e4d1233d297f-config-data\") pod \"nova-kuttl-metadata-2\" (UID: \"df824f05-4250-4af1-a208-e4d1233d297f\") " pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.800869 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/df824f05-4250-4af1-a208-e4d1233d297f-logs\") pod \"nova-kuttl-metadata-2\" (UID: \"df824f05-4250-4af1-a208-e4d1233d297f\") " pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.800979 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/898828a7-a3cd-4455-b719-15d886f937a4-logs\") pod \"nova-kuttl-metadata-1\" (UID: \"898828a7-a3cd-4455-b719-15d886f937a4\") " pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.803760 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df824f05-4250-4af1-a208-e4d1233d297f-config-data\") pod \"nova-kuttl-metadata-2\" (UID: \"df824f05-4250-4af1-a208-e4d1233d297f\") " pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.804266 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/898828a7-a3cd-4455-b719-15d886f937a4-config-data\") pod \"nova-kuttl-metadata-1\" (UID: \"898828a7-a3cd-4455-b719-15d886f937a4\") " pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.814915 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.816597 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwzmj\" (UniqueName: \"kubernetes.io/projected/898828a7-a3cd-4455-b719-15d886f937a4-kube-api-access-fwzmj\") pod \"nova-kuttl-metadata-1\" (UID: \"898828a7-a3cd-4455-b719-15d886f937a4\") " pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.822656 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtdsj\" (UniqueName: \"kubernetes.io/projected/df824f05-4250-4af1-a208-e4d1233d297f-kube-api-access-jtdsj\") pod \"nova-kuttl-metadata-2\" (UID: \"df824f05-4250-4af1-a208-e4d1233d297f\") " pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.830397 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.875870 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 27 13:35:34 crc kubenswrapper[4786]: I0127 13:35:34.894006 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 27 13:35:35 crc kubenswrapper[4786]: I0127 13:35:35.362150 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-2"] Jan 27 13:35:35 crc kubenswrapper[4786]: I0127 13:35:35.437640 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-1"] Jan 27 13:35:35 crc kubenswrapper[4786]: I0127 13:35:35.466375 4786 scope.go:117] "RemoveContainer" containerID="2f94fef961bba31453718ee505c65aba4d5e2d46c581dccdf2daaebbe3a39906" Jan 27 13:35:35 crc kubenswrapper[4786]: E0127 13:35:35.466676 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:35:35 crc kubenswrapper[4786]: I0127 13:35:35.486807 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-2" event={"ID":"e2f7b9c2-401b-456a-93d3-cb99227f3a21","Type":"ContainerStarted","Data":"d218410eac23febe72ef7c54b42f65b629adbf8655a50c20c02656b521660329"} Jan 27 13:35:35 crc kubenswrapper[4786]: I0127 13:35:35.487051 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-1" event={"ID":"c946efaf-f2a6-48b6-a52c-3fd537fc15f4","Type":"ContainerStarted","Data":"68ee64ade851626fd376fe1fe22cc14542272ff2420bc667acfb29253ee61adb"} Jan 27 13:35:35 crc kubenswrapper[4786]: W0127 13:35:35.519121 4786 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod898828a7_a3cd_4455_b719_15d886f937a4.slice/crio-490eb44dbf02a444c8a4a8b67452283752da4c9765e63b719e64f172d8945134 WatchSource:0}: Error finding container 490eb44dbf02a444c8a4a8b67452283752da4c9765e63b719e64f172d8945134: Status 404 returned error can't find the container with id 490eb44dbf02a444c8a4a8b67452283752da4c9765e63b719e64f172d8945134 Jan 27 13:35:35 crc kubenswrapper[4786]: I0127 13:35:35.522162 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-1"] Jan 27 13:35:35 crc kubenswrapper[4786]: I0127 13:35:35.567287 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-2"] Jan 27 13:35:35 crc kubenswrapper[4786]: I0127 13:35:35.569850 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 27 13:35:35 crc kubenswrapper[4786]: I0127 13:35:35.594139 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-1"] Jan 27 13:35:35 crc kubenswrapper[4786]: I0127 13:35:35.608204 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 27 13:35:35 crc kubenswrapper[4786]: I0127 13:35:35.626850 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-2"] Jan 27 13:35:35 crc kubenswrapper[4786]: I0127 13:35:35.657907 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-1"] Jan 27 13:35:35 crc kubenswrapper[4786]: I0127 13:35:35.674754 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-2"] Jan 27 13:35:35 crc kubenswrapper[4786]: I0127 13:35:35.744343 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c584017-f51e-4001-b5b6-dacefc2d7658-config-data\") pod \"nova-kuttl-cell1-conductor-1\" (UID: \"5c584017-f51e-4001-b5b6-dacefc2d7658\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 27 13:35:35 crc kubenswrapper[4786]: I0127 13:35:35.744393 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f28z8\" (UniqueName: \"kubernetes.io/projected/602c933e-40d5-42c8-8319-960566e74b61-kube-api-access-f28z8\") pod \"nova-kuttl-cell1-conductor-2\" (UID: \"602c933e-40d5-42c8-8319-960566e74b61\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 27 13:35:35 crc kubenswrapper[4786]: I0127 13:35:35.744454 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/602c933e-40d5-42c8-8319-960566e74b61-config-data\") pod \"nova-kuttl-cell1-conductor-2\" (UID: \"602c933e-40d5-42c8-8319-960566e74b61\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 27 13:35:35 crc kubenswrapper[4786]: I0127 13:35:35.744681 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-h82mj\" (UniqueName: \"kubernetes.io/projected/5c584017-f51e-4001-b5b6-dacefc2d7658-kube-api-access-h82mj\") pod \"nova-kuttl-cell1-conductor-1\" (UID: \"5c584017-f51e-4001-b5b6-dacefc2d7658\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 27 13:35:35 crc kubenswrapper[4786]: I0127 13:35:35.846249 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c584017-f51e-4001-b5b6-dacefc2d7658-config-data\") pod \"nova-kuttl-cell1-conductor-1\" (UID: \"5c584017-f51e-4001-b5b6-dacefc2d7658\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 27 13:35:35 crc kubenswrapper[4786]: I0127 13:35:35.846293 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f28z8\" (UniqueName: \"kubernetes.io/projected/602c933e-40d5-42c8-8319-960566e74b61-kube-api-access-f28z8\") pod \"nova-kuttl-cell1-conductor-2\" (UID: \"602c933e-40d5-42c8-8319-960566e74b61\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 27 13:35:35 crc kubenswrapper[4786]: I0127 13:35:35.846331 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/602c933e-40d5-42c8-8319-960566e74b61-config-data\") pod \"nova-kuttl-cell1-conductor-2\" (UID: \"602c933e-40d5-42c8-8319-960566e74b61\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 27 13:35:35 crc kubenswrapper[4786]: I0127 13:35:35.846373 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h82mj\" (UniqueName: \"kubernetes.io/projected/5c584017-f51e-4001-b5b6-dacefc2d7658-kube-api-access-h82mj\") pod \"nova-kuttl-cell1-conductor-1\" (UID: \"5c584017-f51e-4001-b5b6-dacefc2d7658\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 27 13:35:35 crc kubenswrapper[4786]: I0127 13:35:35.864400 4786 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/602c933e-40d5-42c8-8319-960566e74b61-config-data\") pod \"nova-kuttl-cell1-conductor-2\" (UID: \"602c933e-40d5-42c8-8319-960566e74b61\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 27 13:35:35 crc kubenswrapper[4786]: I0127 13:35:35.864440 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c584017-f51e-4001-b5b6-dacefc2d7658-config-data\") pod \"nova-kuttl-cell1-conductor-1\" (UID: \"5c584017-f51e-4001-b5b6-dacefc2d7658\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 27 13:35:35 crc kubenswrapper[4786]: I0127 13:35:35.885476 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f28z8\" (UniqueName: \"kubernetes.io/projected/602c933e-40d5-42c8-8319-960566e74b61-kube-api-access-f28z8\") pod \"nova-kuttl-cell1-conductor-2\" (UID: \"602c933e-40d5-42c8-8319-960566e74b61\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 27 13:35:35 crc kubenswrapper[4786]: I0127 13:35:35.886051 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h82mj\" (UniqueName: \"kubernetes.io/projected/5c584017-f51e-4001-b5b6-dacefc2d7658-kube-api-access-h82mj\") pod \"nova-kuttl-cell1-conductor-1\" (UID: \"5c584017-f51e-4001-b5b6-dacefc2d7658\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 27 13:35:35 crc kubenswrapper[4786]: I0127 13:35:35.918436 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 27 13:35:35 crc kubenswrapper[4786]: I0127 13:35:35.930321 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 27 13:35:36 crc kubenswrapper[4786]: I0127 13:35:36.424212 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-2"] Jan 27 13:35:36 crc kubenswrapper[4786]: I0127 13:35:36.432990 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-1"] Jan 27 13:35:36 crc kubenswrapper[4786]: I0127 13:35:36.483870 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-2" event={"ID":"e2f7b9c2-401b-456a-93d3-cb99227f3a21","Type":"ContainerStarted","Data":"7250f0b735f220490a3146ab284737aeb35b0cac638909eb25e670eed4da724c"} Jan 27 13:35:36 crc kubenswrapper[4786]: I0127 13:35:36.486142 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-1" event={"ID":"898828a7-a3cd-4455-b719-15d886f937a4","Type":"ContainerStarted","Data":"a07a605c50a482deb690ae43ecf1fd00bda5615e0bac9020ad7bc5ae5b8ab4b4"} Jan 27 13:35:36 crc kubenswrapper[4786]: I0127 13:35:36.486189 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-1" event={"ID":"898828a7-a3cd-4455-b719-15d886f937a4","Type":"ContainerStarted","Data":"240fd09a7d3dfde135c79d9f3995fc5fc4eca40aed0eed97d4a66c44eef937fc"} Jan 27 13:35:36 crc kubenswrapper[4786]: I0127 13:35:36.486203 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-1" event={"ID":"898828a7-a3cd-4455-b719-15d886f937a4","Type":"ContainerStarted","Data":"490eb44dbf02a444c8a4a8b67452283752da4c9765e63b719e64f172d8945134"} Jan 27 13:35:36 crc kubenswrapper[4786]: I0127 13:35:36.487763 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-1" 
event={"ID":"c946efaf-f2a6-48b6-a52c-3fd537fc15f4","Type":"ContainerStarted","Data":"11168003ef9fa6174893781aaa1cc3cd9b90e1f2f814223e83c0437507be924b"} Jan 27 13:35:36 crc kubenswrapper[4786]: I0127 13:35:36.489853 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-2" event={"ID":"df824f05-4250-4af1-a208-e4d1233d297f","Type":"ContainerStarted","Data":"63ca42c750fdbba36aa46d43b9d5a61bfcfd53c045751428e0dbb40e680d9498"} Jan 27 13:35:36 crc kubenswrapper[4786]: I0127 13:35:36.489902 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-2" event={"ID":"df824f05-4250-4af1-a208-e4d1233d297f","Type":"ContainerStarted","Data":"e0f5521164e86e2fa4cdadfa7fdf83365fde0915db476192c8e042c10d76b01c"} Jan 27 13:35:36 crc kubenswrapper[4786]: I0127 13:35:36.489915 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-2" event={"ID":"df824f05-4250-4af1-a208-e4d1233d297f","Type":"ContainerStarted","Data":"466371f5a457b5f68ddb23ac2988b0349d617a749f80cb4bb288f04ba80ab37d"} Jan 27 13:35:36 crc kubenswrapper[4786]: I0127 13:35:36.491179 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" event={"ID":"5c584017-f51e-4001-b5b6-dacefc2d7658","Type":"ContainerStarted","Data":"3b3bf9fb85f3c4b77081efc160a801ab41889ca3d9b8bf8ea5998ccfaa7e5a4c"} Jan 27 13:35:36 crc kubenswrapper[4786]: I0127 13:35:36.512815 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-2" podStartSLOduration=2.512798611 podStartE2EDuration="2.512798611s" podCreationTimestamp="2026-01-27 13:35:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:35:36.49810921 +0000 UTC m=+1719.708723329" watchObservedRunningTime="2026-01-27 13:35:36.512798611 +0000 UTC m=+1719.723412730" 
Jan 27 13:35:36 crc kubenswrapper[4786]: I0127 13:35:36.536630 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-1" podStartSLOduration=2.536590302 podStartE2EDuration="2.536590302s" podCreationTimestamp="2026-01-27 13:35:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:35:36.517365146 +0000 UTC m=+1719.727979265" watchObservedRunningTime="2026-01-27 13:35:36.536590302 +0000 UTC m=+1719.747204421" Jan 27 13:35:36 crc kubenswrapper[4786]: I0127 13:35:36.542541 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-2" podStartSLOduration=2.542525474 podStartE2EDuration="2.542525474s" podCreationTimestamp="2026-01-27 13:35:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:35:36.542504123 +0000 UTC m=+1719.753118242" watchObservedRunningTime="2026-01-27 13:35:36.542525474 +0000 UTC m=+1719.753139583" Jan 27 13:35:36 crc kubenswrapper[4786]: I0127 13:35:36.562237 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-1" podStartSLOduration=2.562216722 podStartE2EDuration="2.562216722s" podCreationTimestamp="2026-01-27 13:35:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:35:36.558015687 +0000 UTC m=+1719.768629826" watchObservedRunningTime="2026-01-27 13:35:36.562216722 +0000 UTC m=+1719.772830851" Jan 27 13:35:37 crc kubenswrapper[4786]: I0127 13:35:37.505371 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" 
event={"ID":"5c584017-f51e-4001-b5b6-dacefc2d7658","Type":"ContainerStarted","Data":"e79fa17f4f1d237ed33d0bb3a19d70bcdfacfbd92d16a884a04a025329b17e3b"} Jan 27 13:35:37 crc kubenswrapper[4786]: I0127 13:35:37.505791 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 27 13:35:37 crc kubenswrapper[4786]: I0127 13:35:37.507398 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" event={"ID":"602c933e-40d5-42c8-8319-960566e74b61","Type":"ContainerStarted","Data":"863a49218bc0276520fd165e13d85605ed46d0e8629f4c1777483ebe0c6fc71b"} Jan 27 13:35:37 crc kubenswrapper[4786]: I0127 13:35:37.507432 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" event={"ID":"602c933e-40d5-42c8-8319-960566e74b61","Type":"ContainerStarted","Data":"015ebf81fe6b06c23a909f95f6f6a7364b70de86eaff437fe3449e53b6b4ef33"} Jan 27 13:35:37 crc kubenswrapper[4786]: I0127 13:35:37.507446 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 27 13:35:37 crc kubenswrapper[4786]: I0127 13:35:37.521300 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" podStartSLOduration=2.52123566 podStartE2EDuration="2.52123566s" podCreationTimestamp="2026-01-27 13:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:35:37.519994467 +0000 UTC m=+1720.730608596" watchObservedRunningTime="2026-01-27 13:35:37.52123566 +0000 UTC m=+1720.731849779" Jan 27 13:35:37 crc kubenswrapper[4786]: I0127 13:35:37.543895 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" podStartSLOduration=2.543876939 
podStartE2EDuration="2.543876939s" podCreationTimestamp="2026-01-27 13:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:35:37.539011897 +0000 UTC m=+1720.749626006" watchObservedRunningTime="2026-01-27 13:35:37.543876939 +0000 UTC m=+1720.754491058" Jan 27 13:35:39 crc kubenswrapper[4786]: I0127 13:35:39.815360 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 27 13:35:39 crc kubenswrapper[4786]: I0127 13:35:39.831497 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 27 13:35:39 crc kubenswrapper[4786]: I0127 13:35:39.877371 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 27 13:35:39 crc kubenswrapper[4786]: I0127 13:35:39.877427 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 27 13:35:39 crc kubenswrapper[4786]: I0127 13:35:39.894890 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 27 13:35:39 crc kubenswrapper[4786]: I0127 13:35:39.894950 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 27 13:35:42 crc kubenswrapper[4786]: I0127 13:35:42.940000 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 27 13:35:42 crc kubenswrapper[4786]: I0127 13:35:42.940085 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 27 13:35:42 crc kubenswrapper[4786]: I0127 13:35:42.940825 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 27 13:35:42 crc 
kubenswrapper[4786]: I0127 13:35:42.941172 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 27 13:35:42 crc kubenswrapper[4786]: I0127 13:35:42.942991 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 27 13:35:42 crc kubenswrapper[4786]: I0127 13:35:42.944631 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 27 13:35:42 crc kubenswrapper[4786]: I0127 13:35:42.982719 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 27 13:35:42 crc kubenswrapper[4786]: I0127 13:35:42.983272 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 27 13:35:42 crc kubenswrapper[4786]: I0127 13:35:42.985789 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 27 13:35:42 crc kubenswrapper[4786]: I0127 13:35:42.996436 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 27 13:35:43 crc kubenswrapper[4786]: I0127 13:35:43.550106 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 27 13:35:43 crc kubenswrapper[4786]: I0127 13:35:43.552823 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 27 13:35:44 crc kubenswrapper[4786]: I0127 13:35:44.815800 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 27 13:35:44 crc kubenswrapper[4786]: I0127 13:35:44.831162 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 27 13:35:44 crc kubenswrapper[4786]: I0127 
13:35:44.837960 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 27 13:35:44 crc kubenswrapper[4786]: I0127 13:35:44.855220 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 27 13:35:44 crc kubenswrapper[4786]: I0127 13:35:44.877598 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 27 13:35:44 crc kubenswrapper[4786]: I0127 13:35:44.877905 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 27 13:35:44 crc kubenswrapper[4786]: I0127 13:35:44.895860 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 27 13:35:44 crc kubenswrapper[4786]: I0127 13:35:44.896224 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 27 13:35:45 crc kubenswrapper[4786]: I0127 13:35:45.591083 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 27 13:35:45 crc kubenswrapper[4786]: I0127 13:35:45.593946 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 27 13:35:45 crc kubenswrapper[4786]: I0127 13:35:45.944949 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 27 13:35:45 crc kubenswrapper[4786]: I0127 13:35:45.970909 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 27 13:35:46 crc kubenswrapper[4786]: I0127 13:35:46.042789 4786 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-1" 
podUID="898828a7-a3cd-4455-b719-15d886f937a4" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.175:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 13:35:46 crc kubenswrapper[4786]: I0127 13:35:46.042789 4786 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-2" podUID="df824f05-4250-4af1-a208-e4d1233d297f" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.176:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 13:35:46 crc kubenswrapper[4786]: I0127 13:35:46.042846 4786 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-1" podUID="898828a7-a3cd-4455-b719-15d886f937a4" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.175:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 13:35:46 crc kubenswrapper[4786]: I0127 13:35:46.042876 4786 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-2" podUID="df824f05-4250-4af1-a208-e4d1233d297f" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.176:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 13:35:50 crc kubenswrapper[4786]: I0127 13:35:50.465100 4786 scope.go:117] "RemoveContainer" containerID="2f94fef961bba31453718ee505c65aba4d5e2d46c581dccdf2daaebbe3a39906" Jan 27 13:35:50 crc kubenswrapper[4786]: E0127 13:35:50.465686 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:35:54 crc kubenswrapper[4786]: I0127 13:35:54.879595 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 27 13:35:54 crc kubenswrapper[4786]: I0127 13:35:54.882157 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 27 13:35:54 crc kubenswrapper[4786]: I0127 13:35:54.882861 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 27 13:35:54 crc kubenswrapper[4786]: I0127 13:35:54.897653 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 27 13:35:54 crc kubenswrapper[4786]: I0127 13:35:54.900372 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 27 13:35:54 crc kubenswrapper[4786]: I0127 13:35:54.900585 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 27 13:35:55 crc kubenswrapper[4786]: I0127 13:35:55.656422 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 27 13:35:55 crc kubenswrapper[4786]: I0127 13:35:55.657327 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 27 13:35:56 crc kubenswrapper[4786]: I0127 13:35:56.776422 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-2"] Jan 27 13:35:56 crc kubenswrapper[4786]: I0127 13:35:56.776696 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-2" podUID="01603e06-2096-4686-934f-59aade63c30d" 
containerName="nova-kuttl-api-log" containerID="cri-o://20f42345c808e08cc17e9e6d741b5c56c9d151426e98ad97cb1aa623cd2995f1" gracePeriod=30 Jan 27 13:35:56 crc kubenswrapper[4786]: I0127 13:35:56.776806 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-2" podUID="01603e06-2096-4686-934f-59aade63c30d" containerName="nova-kuttl-api-api" containerID="cri-o://d52c4a275ff5a9925f6338aedc505ece806b57c1cd8be1e7c49af96bbb995ee5" gracePeriod=30 Jan 27 13:35:56 crc kubenswrapper[4786]: I0127 13:35:56.793196 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-1"] Jan 27 13:35:56 crc kubenswrapper[4786]: I0127 13:35:56.793922 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-1" podUID="51df781b-8436-4545-8e9a-c9770e35814b" containerName="nova-kuttl-api-api" containerID="cri-o://d10ebdb967fbe75157a1f18af14fcbcc36ce9291fb1faef67b658443a7bb79ec" gracePeriod=30 Jan 27 13:35:56 crc kubenswrapper[4786]: I0127 13:35:56.793865 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-1" podUID="51df781b-8436-4545-8e9a-c9770e35814b" containerName="nova-kuttl-api-log" containerID="cri-o://9cf5fbcd089a97b29f6eee2d896152e659b2c68a5cbcb84845abfbc5162bf015" gracePeriod=30 Jan 27 13:35:57 crc kubenswrapper[4786]: I0127 13:35:57.090448 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-2"] Jan 27 13:35:57 crc kubenswrapper[4786]: I0127 13:35:57.090745 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" podUID="1247ecc9-9177-446b-b939-3d158d4d0cd0" containerName="nova-kuttl-cell0-conductor-conductor" containerID="cri-o://fd7331605674ef0bcca5c26e92d50aa1d9e1fc56b71de769e78852c825b29778" gracePeriod=30 Jan 27 13:35:57 crc kubenswrapper[4786]: I0127 
13:35:57.098362 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-1"] Jan 27 13:35:57 crc kubenswrapper[4786]: I0127 13:35:57.098564 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" podUID="62afa32f-2cf3-4581-8db3-0a4dd0d70545" containerName="nova-kuttl-cell0-conductor-conductor" containerID="cri-o://9bcb9c71905a133eee1dff949e34ca22440258f1eab51d520f8aeceff6f562dd" gracePeriod=30 Jan 27 13:35:57 crc kubenswrapper[4786]: I0127 13:35:57.670400 4786 generic.go:334] "Generic (PLEG): container finished" podID="51df781b-8436-4545-8e9a-c9770e35814b" containerID="9cf5fbcd089a97b29f6eee2d896152e659b2c68a5cbcb84845abfbc5162bf015" exitCode=143 Jan 27 13:35:57 crc kubenswrapper[4786]: I0127 13:35:57.670482 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-1" event={"ID":"51df781b-8436-4545-8e9a-c9770e35814b","Type":"ContainerDied","Data":"9cf5fbcd089a97b29f6eee2d896152e659b2c68a5cbcb84845abfbc5162bf015"} Jan 27 13:35:57 crc kubenswrapper[4786]: I0127 13:35:57.673526 4786 generic.go:334] "Generic (PLEG): container finished" podID="01603e06-2096-4686-934f-59aade63c30d" containerID="20f42345c808e08cc17e9e6d741b5c56c9d151426e98ad97cb1aa623cd2995f1" exitCode=143 Jan 27 13:35:57 crc kubenswrapper[4786]: I0127 13:35:57.673960 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-2" event={"ID":"01603e06-2096-4686-934f-59aade63c30d","Type":"ContainerDied","Data":"20f42345c808e08cc17e9e6d741b5c56c9d151426e98ad97cb1aa623cd2995f1"} Jan 27 13:35:58 crc kubenswrapper[4786]: E0127 13:35:58.308099 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fd7331605674ef0bcca5c26e92d50aa1d9e1fc56b71de769e78852c825b29778" 
cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 13:35:58 crc kubenswrapper[4786]: E0127 13:35:58.309538 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fd7331605674ef0bcca5c26e92d50aa1d9e1fc56b71de769e78852c825b29778" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 13:35:58 crc kubenswrapper[4786]: E0127 13:35:58.310560 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fd7331605674ef0bcca5c26e92d50aa1d9e1fc56b71de769e78852c825b29778" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 13:35:58 crc kubenswrapper[4786]: E0127 13:35:58.310623 4786 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" podUID="1247ecc9-9177-446b-b939-3d158d4d0cd0" containerName="nova-kuttl-cell0-conductor-conductor" Jan 27 13:35:58 crc kubenswrapper[4786]: E0127 13:35:58.317715 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9bcb9c71905a133eee1dff949e34ca22440258f1eab51d520f8aeceff6f562dd" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 13:35:58 crc kubenswrapper[4786]: E0127 13:35:58.319094 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9bcb9c71905a133eee1dff949e34ca22440258f1eab51d520f8aeceff6f562dd" 
cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 13:35:58 crc kubenswrapper[4786]: E0127 13:35:58.320294 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9bcb9c71905a133eee1dff949e34ca22440258f1eab51d520f8aeceff6f562dd" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 13:35:58 crc kubenswrapper[4786]: E0127 13:35:58.320338 4786 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" podUID="62afa32f-2cf3-4581-8db3-0a4dd0d70545" containerName="nova-kuttl-cell0-conductor-conductor" Jan 27 13:35:58 crc kubenswrapper[4786]: I0127 13:35:58.587132 4786 scope.go:117] "RemoveContainer" containerID="c9c4e00ba48fc676326785afed4dac85596d012dbdebfd5f8cc0fe3c13969aff" Jan 27 13:35:58 crc kubenswrapper[4786]: I0127 13:35:58.605327 4786 scope.go:117] "RemoveContainer" containerID="f371b29f3df7547a3108a17c8c909bfcd52de5dd36a6e128b39aec6ccff71222" Jan 27 13:35:58 crc kubenswrapper[4786]: I0127 13:35:58.648785 4786 scope.go:117] "RemoveContainer" containerID="4a5d88af7dc7523f178ff882c98200da2ee5c08c3b0172f9da010d5dfac91dd4" Jan 27 13:35:58 crc kubenswrapper[4786]: I0127 13:35:58.694589 4786 scope.go:117] "RemoveContainer" containerID="cd6edd288c29486bdc899a5dc79ca8e7875d10a8c6557a16890ad0956988dc46" Jan 27 13:35:58 crc kubenswrapper[4786]: I0127 13:35:58.713990 4786 scope.go:117] "RemoveContainer" containerID="28fff4ed986210774c27b56df470b552b21a7df5c245dde34411edc671a76fc0" Jan 27 13:35:58 crc kubenswrapper[4786]: I0127 13:35:58.756112 4786 scope.go:117] "RemoveContainer" containerID="cc2ad4110ffb34287f172b11672e3cb0955c32f8af0f56fc4618864c690e918f" Jan 27 13:35:58 crc kubenswrapper[4786]: I0127 13:35:58.787769 4786 
scope.go:117] "RemoveContainer" containerID="0a67791a9ef2bfaafbd8aede20528025945211b4524995c8c16c92a89c7bd83a" Jan 27 13:36:00 crc kubenswrapper[4786]: I0127 13:36:00.367259 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 27 13:36:00 crc kubenswrapper[4786]: I0127 13:36:00.495395 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51df781b-8436-4545-8e9a-c9770e35814b-logs\") pod \"51df781b-8436-4545-8e9a-c9770e35814b\" (UID: \"51df781b-8436-4545-8e9a-c9770e35814b\") " Jan 27 13:36:00 crc kubenswrapper[4786]: I0127 13:36:00.495597 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcfhb\" (UniqueName: \"kubernetes.io/projected/51df781b-8436-4545-8e9a-c9770e35814b-kube-api-access-dcfhb\") pod \"51df781b-8436-4545-8e9a-c9770e35814b\" (UID: \"51df781b-8436-4545-8e9a-c9770e35814b\") " Jan 27 13:36:00 crc kubenswrapper[4786]: I0127 13:36:00.495707 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51df781b-8436-4545-8e9a-c9770e35814b-config-data\") pod \"51df781b-8436-4545-8e9a-c9770e35814b\" (UID: \"51df781b-8436-4545-8e9a-c9770e35814b\") " Jan 27 13:36:00 crc kubenswrapper[4786]: I0127 13:36:00.496247 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51df781b-8436-4545-8e9a-c9770e35814b-logs" (OuterVolumeSpecName: "logs") pod "51df781b-8436-4545-8e9a-c9770e35814b" (UID: "51df781b-8436-4545-8e9a-c9770e35814b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:36:00 crc kubenswrapper[4786]: I0127 13:36:00.500763 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51df781b-8436-4545-8e9a-c9770e35814b-kube-api-access-dcfhb" (OuterVolumeSpecName: "kube-api-access-dcfhb") pod "51df781b-8436-4545-8e9a-c9770e35814b" (UID: "51df781b-8436-4545-8e9a-c9770e35814b"). InnerVolumeSpecName "kube-api-access-dcfhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:36:00 crc kubenswrapper[4786]: I0127 13:36:00.518040 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51df781b-8436-4545-8e9a-c9770e35814b-config-data" (OuterVolumeSpecName: "config-data") pod "51df781b-8436-4545-8e9a-c9770e35814b" (UID: "51df781b-8436-4545-8e9a-c9770e35814b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:36:00 crc kubenswrapper[4786]: I0127 13:36:00.598075 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcfhb\" (UniqueName: \"kubernetes.io/projected/51df781b-8436-4545-8e9a-c9770e35814b-kube-api-access-dcfhb\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:00 crc kubenswrapper[4786]: I0127 13:36:00.598112 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51df781b-8436-4545-8e9a-c9770e35814b-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:00 crc kubenswrapper[4786]: I0127 13:36:00.598124 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51df781b-8436-4545-8e9a-c9770e35814b-logs\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:00 crc kubenswrapper[4786]: I0127 13:36:00.703800 4786 generic.go:334] "Generic (PLEG): container finished" podID="51df781b-8436-4545-8e9a-c9770e35814b" containerID="d10ebdb967fbe75157a1f18af14fcbcc36ce9291fb1faef67b658443a7bb79ec" exitCode=0 Jan 27 
13:36:00 crc kubenswrapper[4786]: I0127 13:36:00.703854 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 27 13:36:00 crc kubenswrapper[4786]: I0127 13:36:00.703863 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-1" event={"ID":"51df781b-8436-4545-8e9a-c9770e35814b","Type":"ContainerDied","Data":"d10ebdb967fbe75157a1f18af14fcbcc36ce9291fb1faef67b658443a7bb79ec"} Jan 27 13:36:00 crc kubenswrapper[4786]: I0127 13:36:00.703908 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-1" event={"ID":"51df781b-8436-4545-8e9a-c9770e35814b","Type":"ContainerDied","Data":"be6ffe3cb50a6e089a50dc11ce2c4ead7ed3c314d7984feb8fa95ec729eae1a5"} Jan 27 13:36:00 crc kubenswrapper[4786]: I0127 13:36:00.703931 4786 scope.go:117] "RemoveContainer" containerID="d10ebdb967fbe75157a1f18af14fcbcc36ce9291fb1faef67b658443a7bb79ec" Jan 27 13:36:00 crc kubenswrapper[4786]: I0127 13:36:00.738598 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-1"] Jan 27 13:36:00 crc kubenswrapper[4786]: I0127 13:36:00.748848 4786 scope.go:117] "RemoveContainer" containerID="9cf5fbcd089a97b29f6eee2d896152e659b2c68a5cbcb84845abfbc5162bf015" Jan 27 13:36:00 crc kubenswrapper[4786]: I0127 13:36:00.750256 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-1"] Jan 27 13:36:00 crc kubenswrapper[4786]: I0127 13:36:00.769841 4786 scope.go:117] "RemoveContainer" containerID="d10ebdb967fbe75157a1f18af14fcbcc36ce9291fb1faef67b658443a7bb79ec" Jan 27 13:36:00 crc kubenswrapper[4786]: E0127 13:36:00.770818 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d10ebdb967fbe75157a1f18af14fcbcc36ce9291fb1faef67b658443a7bb79ec\": container with ID starting with 
d10ebdb967fbe75157a1f18af14fcbcc36ce9291fb1faef67b658443a7bb79ec not found: ID does not exist" containerID="d10ebdb967fbe75157a1f18af14fcbcc36ce9291fb1faef67b658443a7bb79ec" Jan 27 13:36:00 crc kubenswrapper[4786]: I0127 13:36:00.770865 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d10ebdb967fbe75157a1f18af14fcbcc36ce9291fb1faef67b658443a7bb79ec"} err="failed to get container status \"d10ebdb967fbe75157a1f18af14fcbcc36ce9291fb1faef67b658443a7bb79ec\": rpc error: code = NotFound desc = could not find container \"d10ebdb967fbe75157a1f18af14fcbcc36ce9291fb1faef67b658443a7bb79ec\": container with ID starting with d10ebdb967fbe75157a1f18af14fcbcc36ce9291fb1faef67b658443a7bb79ec not found: ID does not exist" Jan 27 13:36:00 crc kubenswrapper[4786]: I0127 13:36:00.770890 4786 scope.go:117] "RemoveContainer" containerID="9cf5fbcd089a97b29f6eee2d896152e659b2c68a5cbcb84845abfbc5162bf015" Jan 27 13:36:00 crc kubenswrapper[4786]: E0127 13:36:00.771533 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cf5fbcd089a97b29f6eee2d896152e659b2c68a5cbcb84845abfbc5162bf015\": container with ID starting with 9cf5fbcd089a97b29f6eee2d896152e659b2c68a5cbcb84845abfbc5162bf015 not found: ID does not exist" containerID="9cf5fbcd089a97b29f6eee2d896152e659b2c68a5cbcb84845abfbc5162bf015" Jan 27 13:36:00 crc kubenswrapper[4786]: I0127 13:36:00.771566 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cf5fbcd089a97b29f6eee2d896152e659b2c68a5cbcb84845abfbc5162bf015"} err="failed to get container status \"9cf5fbcd089a97b29f6eee2d896152e659b2c68a5cbcb84845abfbc5162bf015\": rpc error: code = NotFound desc = could not find container \"9cf5fbcd089a97b29f6eee2d896152e659b2c68a5cbcb84845abfbc5162bf015\": container with ID starting with 9cf5fbcd089a97b29f6eee2d896152e659b2c68a5cbcb84845abfbc5162bf015 not found: ID does not 
exist" Jan 27 13:36:01 crc kubenswrapper[4786]: I0127 13:36:01.474855 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51df781b-8436-4545-8e9a-c9770e35814b" path="/var/lib/kubelet/pods/51df781b-8436-4545-8e9a-c9770e35814b/volumes" Jan 27 13:36:01 crc kubenswrapper[4786]: I0127 13:36:01.508810 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 27 13:36:01 crc kubenswrapper[4786]: I0127 13:36:01.621236 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vz4xs\" (UniqueName: \"kubernetes.io/projected/01603e06-2096-4686-934f-59aade63c30d-kube-api-access-vz4xs\") pod \"01603e06-2096-4686-934f-59aade63c30d\" (UID: \"01603e06-2096-4686-934f-59aade63c30d\") " Jan 27 13:36:01 crc kubenswrapper[4786]: I0127 13:36:01.621288 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01603e06-2096-4686-934f-59aade63c30d-logs\") pod \"01603e06-2096-4686-934f-59aade63c30d\" (UID: \"01603e06-2096-4686-934f-59aade63c30d\") " Jan 27 13:36:01 crc kubenswrapper[4786]: I0127 13:36:01.621334 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01603e06-2096-4686-934f-59aade63c30d-config-data\") pod \"01603e06-2096-4686-934f-59aade63c30d\" (UID: \"01603e06-2096-4686-934f-59aade63c30d\") " Jan 27 13:36:01 crc kubenswrapper[4786]: I0127 13:36:01.623217 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01603e06-2096-4686-934f-59aade63c30d-logs" (OuterVolumeSpecName: "logs") pod "01603e06-2096-4686-934f-59aade63c30d" (UID: "01603e06-2096-4686-934f-59aade63c30d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:36:01 crc kubenswrapper[4786]: I0127 13:36:01.628684 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01603e06-2096-4686-934f-59aade63c30d-kube-api-access-vz4xs" (OuterVolumeSpecName: "kube-api-access-vz4xs") pod "01603e06-2096-4686-934f-59aade63c30d" (UID: "01603e06-2096-4686-934f-59aade63c30d"). InnerVolumeSpecName "kube-api-access-vz4xs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:36:01 crc kubenswrapper[4786]: I0127 13:36:01.655094 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01603e06-2096-4686-934f-59aade63c30d-config-data" (OuterVolumeSpecName: "config-data") pod "01603e06-2096-4686-934f-59aade63c30d" (UID: "01603e06-2096-4686-934f-59aade63c30d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:36:01 crc kubenswrapper[4786]: I0127 13:36:01.719669 4786 generic.go:334] "Generic (PLEG): container finished" podID="01603e06-2096-4686-934f-59aade63c30d" containerID="d52c4a275ff5a9925f6338aedc505ece806b57c1cd8be1e7c49af96bbb995ee5" exitCode=0 Jan 27 13:36:01 crc kubenswrapper[4786]: I0127 13:36:01.719721 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-2" event={"ID":"01603e06-2096-4686-934f-59aade63c30d","Type":"ContainerDied","Data":"d52c4a275ff5a9925f6338aedc505ece806b57c1cd8be1e7c49af96bbb995ee5"} Jan 27 13:36:01 crc kubenswrapper[4786]: I0127 13:36:01.719756 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-2" event={"ID":"01603e06-2096-4686-934f-59aade63c30d","Type":"ContainerDied","Data":"eaac374789d6413aee73aba7f6f311ad4229b14540765940feae66563a619b69"} Jan 27 13:36:01 crc kubenswrapper[4786]: I0127 13:36:01.719766 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 27 13:36:01 crc kubenswrapper[4786]: I0127 13:36:01.719783 4786 scope.go:117] "RemoveContainer" containerID="d52c4a275ff5a9925f6338aedc505ece806b57c1cd8be1e7c49af96bbb995ee5" Jan 27 13:36:01 crc kubenswrapper[4786]: I0127 13:36:01.724196 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vz4xs\" (UniqueName: \"kubernetes.io/projected/01603e06-2096-4686-934f-59aade63c30d-kube-api-access-vz4xs\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:01 crc kubenswrapper[4786]: I0127 13:36:01.724227 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01603e06-2096-4686-934f-59aade63c30d-logs\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:01 crc kubenswrapper[4786]: I0127 13:36:01.724241 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01603e06-2096-4686-934f-59aade63c30d-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:01 crc kubenswrapper[4786]: I0127 13:36:01.756745 4786 scope.go:117] "RemoveContainer" containerID="20f42345c808e08cc17e9e6d741b5c56c9d151426e98ad97cb1aa623cd2995f1" Jan 27 13:36:01 crc kubenswrapper[4786]: I0127 13:36:01.757049 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-2"] Jan 27 13:36:01 crc kubenswrapper[4786]: I0127 13:36:01.769609 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-2"] Jan 27 13:36:01 crc kubenswrapper[4786]: I0127 13:36:01.786797 4786 scope.go:117] "RemoveContainer" containerID="d52c4a275ff5a9925f6338aedc505ece806b57c1cd8be1e7c49af96bbb995ee5" Jan 27 13:36:01 crc kubenswrapper[4786]: E0127 13:36:01.787398 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d52c4a275ff5a9925f6338aedc505ece806b57c1cd8be1e7c49af96bbb995ee5\": container with ID 
starting with d52c4a275ff5a9925f6338aedc505ece806b57c1cd8be1e7c49af96bbb995ee5 not found: ID does not exist" containerID="d52c4a275ff5a9925f6338aedc505ece806b57c1cd8be1e7c49af96bbb995ee5" Jan 27 13:36:01 crc kubenswrapper[4786]: I0127 13:36:01.787478 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d52c4a275ff5a9925f6338aedc505ece806b57c1cd8be1e7c49af96bbb995ee5"} err="failed to get container status \"d52c4a275ff5a9925f6338aedc505ece806b57c1cd8be1e7c49af96bbb995ee5\": rpc error: code = NotFound desc = could not find container \"d52c4a275ff5a9925f6338aedc505ece806b57c1cd8be1e7c49af96bbb995ee5\": container with ID starting with d52c4a275ff5a9925f6338aedc505ece806b57c1cd8be1e7c49af96bbb995ee5 not found: ID does not exist" Jan 27 13:36:01 crc kubenswrapper[4786]: I0127 13:36:01.787547 4786 scope.go:117] "RemoveContainer" containerID="20f42345c808e08cc17e9e6d741b5c56c9d151426e98ad97cb1aa623cd2995f1" Jan 27 13:36:01 crc kubenswrapper[4786]: E0127 13:36:01.788022 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20f42345c808e08cc17e9e6d741b5c56c9d151426e98ad97cb1aa623cd2995f1\": container with ID starting with 20f42345c808e08cc17e9e6d741b5c56c9d151426e98ad97cb1aa623cd2995f1 not found: ID does not exist" containerID="20f42345c808e08cc17e9e6d741b5c56c9d151426e98ad97cb1aa623cd2995f1" Jan 27 13:36:01 crc kubenswrapper[4786]: I0127 13:36:01.788055 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20f42345c808e08cc17e9e6d741b5c56c9d151426e98ad97cb1aa623cd2995f1"} err="failed to get container status \"20f42345c808e08cc17e9e6d741b5c56c9d151426e98ad97cb1aa623cd2995f1\": rpc error: code = NotFound desc = could not find container \"20f42345c808e08cc17e9e6d741b5c56c9d151426e98ad97cb1aa623cd2995f1\": container with ID starting with 20f42345c808e08cc17e9e6d741b5c56c9d151426e98ad97cb1aa623cd2995f1 not found: 
ID does not exist" Jan 27 13:36:02 crc kubenswrapper[4786]: I0127 13:36:02.465013 4786 scope.go:117] "RemoveContainer" containerID="2f94fef961bba31453718ee505c65aba4d5e2d46c581dccdf2daaebbe3a39906" Jan 27 13:36:02 crc kubenswrapper[4786]: E0127 13:36:02.465355 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:36:02 crc kubenswrapper[4786]: I0127 13:36:02.735244 4786 generic.go:334] "Generic (PLEG): container finished" podID="1247ecc9-9177-446b-b939-3d158d4d0cd0" containerID="fd7331605674ef0bcca5c26e92d50aa1d9e1fc56b71de769e78852c825b29778" exitCode=0 Jan 27 13:36:02 crc kubenswrapper[4786]: I0127 13:36:02.735720 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" event={"ID":"1247ecc9-9177-446b-b939-3d158d4d0cd0","Type":"ContainerDied","Data":"fd7331605674ef0bcca5c26e92d50aa1d9e1fc56b71de769e78852c825b29778"} Jan 27 13:36:02 crc kubenswrapper[4786]: I0127 13:36:02.736034 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" event={"ID":"1247ecc9-9177-446b-b939-3d158d4d0cd0","Type":"ContainerDied","Data":"18f3c56c8cfea4079d580fe52c9c5d44de2dbfe10ac3ae31e7c6c5c8ec2a72da"} Jan 27 13:36:02 crc kubenswrapper[4786]: I0127 13:36:02.736339 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18f3c56c8cfea4079d580fe52c9c5d44de2dbfe10ac3ae31e7c6c5c8ec2a72da" Jan 27 13:36:02 crc kubenswrapper[4786]: I0127 13:36:02.739672 4786 generic.go:334] "Generic (PLEG): container finished" podID="62afa32f-2cf3-4581-8db3-0a4dd0d70545" 
containerID="9bcb9c71905a133eee1dff949e34ca22440258f1eab51d520f8aeceff6f562dd" exitCode=0 Jan 27 13:36:02 crc kubenswrapper[4786]: I0127 13:36:02.739749 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" event={"ID":"62afa32f-2cf3-4581-8db3-0a4dd0d70545","Type":"ContainerDied","Data":"9bcb9c71905a133eee1dff949e34ca22440258f1eab51d520f8aeceff6f562dd"} Jan 27 13:36:02 crc kubenswrapper[4786]: I0127 13:36:02.771427 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 27 13:36:02 crc kubenswrapper[4786]: I0127 13:36:02.944917 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1247ecc9-9177-446b-b939-3d158d4d0cd0-config-data\") pod \"1247ecc9-9177-446b-b939-3d158d4d0cd0\" (UID: \"1247ecc9-9177-446b-b939-3d158d4d0cd0\") " Jan 27 13:36:02 crc kubenswrapper[4786]: I0127 13:36:02.945200 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kj5c\" (UniqueName: \"kubernetes.io/projected/1247ecc9-9177-446b-b939-3d158d4d0cd0-kube-api-access-4kj5c\") pod \"1247ecc9-9177-446b-b939-3d158d4d0cd0\" (UID: \"1247ecc9-9177-446b-b939-3d158d4d0cd0\") " Jan 27 13:36:02 crc kubenswrapper[4786]: I0127 13:36:02.953278 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1247ecc9-9177-446b-b939-3d158d4d0cd0-kube-api-access-4kj5c" (OuterVolumeSpecName: "kube-api-access-4kj5c") pod "1247ecc9-9177-446b-b939-3d158d4d0cd0" (UID: "1247ecc9-9177-446b-b939-3d158d4d0cd0"). InnerVolumeSpecName "kube-api-access-4kj5c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:36:02 crc kubenswrapper[4786]: I0127 13:36:02.970995 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1247ecc9-9177-446b-b939-3d158d4d0cd0-config-data" (OuterVolumeSpecName: "config-data") pod "1247ecc9-9177-446b-b939-3d158d4d0cd0" (UID: "1247ecc9-9177-446b-b939-3d158d4d0cd0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:36:03 crc kubenswrapper[4786]: I0127 13:36:03.026775 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 27 13:36:03 crc kubenswrapper[4786]: I0127 13:36:03.048559 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1247ecc9-9177-446b-b939-3d158d4d0cd0-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:03 crc kubenswrapper[4786]: I0127 13:36:03.048594 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kj5c\" (UniqueName: \"kubernetes.io/projected/1247ecc9-9177-446b-b939-3d158d4d0cd0-kube-api-access-4kj5c\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:03 crc kubenswrapper[4786]: I0127 13:36:03.149674 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbxkm\" (UniqueName: \"kubernetes.io/projected/62afa32f-2cf3-4581-8db3-0a4dd0d70545-kube-api-access-nbxkm\") pod \"62afa32f-2cf3-4581-8db3-0a4dd0d70545\" (UID: \"62afa32f-2cf3-4581-8db3-0a4dd0d70545\") " Jan 27 13:36:03 crc kubenswrapper[4786]: I0127 13:36:03.149869 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62afa32f-2cf3-4581-8db3-0a4dd0d70545-config-data\") pod \"62afa32f-2cf3-4581-8db3-0a4dd0d70545\" (UID: \"62afa32f-2cf3-4581-8db3-0a4dd0d70545\") " Jan 27 13:36:03 crc kubenswrapper[4786]: I0127 13:36:03.152691 
4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62afa32f-2cf3-4581-8db3-0a4dd0d70545-kube-api-access-nbxkm" (OuterVolumeSpecName: "kube-api-access-nbxkm") pod "62afa32f-2cf3-4581-8db3-0a4dd0d70545" (UID: "62afa32f-2cf3-4581-8db3-0a4dd0d70545"). InnerVolumeSpecName "kube-api-access-nbxkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:36:03 crc kubenswrapper[4786]: I0127 13:36:03.175002 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62afa32f-2cf3-4581-8db3-0a4dd0d70545-config-data" (OuterVolumeSpecName: "config-data") pod "62afa32f-2cf3-4581-8db3-0a4dd0d70545" (UID: "62afa32f-2cf3-4581-8db3-0a4dd0d70545"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:36:03 crc kubenswrapper[4786]: I0127 13:36:03.251583 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbxkm\" (UniqueName: \"kubernetes.io/projected/62afa32f-2cf3-4581-8db3-0a4dd0d70545-kube-api-access-nbxkm\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:03 crc kubenswrapper[4786]: I0127 13:36:03.251642 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62afa32f-2cf3-4581-8db3-0a4dd0d70545-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:03 crc kubenswrapper[4786]: I0127 13:36:03.481785 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01603e06-2096-4686-934f-59aade63c30d" path="/var/lib/kubelet/pods/01603e06-2096-4686-934f-59aade63c30d/volumes" Jan 27 13:36:03 crc kubenswrapper[4786]: I0127 13:36:03.599694 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-2"] Jan 27 13:36:03 crc kubenswrapper[4786]: I0127 13:36:03.600112 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-2" 
podUID="e2f7b9c2-401b-456a-93d3-cb99227f3a21" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://7250f0b735f220490a3146ab284737aeb35b0cac638909eb25e670eed4da724c" gracePeriod=30 Jan 27 13:36:03 crc kubenswrapper[4786]: I0127 13:36:03.616071 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-1"] Jan 27 13:36:03 crc kubenswrapper[4786]: I0127 13:36:03.616318 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-1" podUID="c946efaf-f2a6-48b6-a52c-3fd537fc15f4" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://11168003ef9fa6174893781aaa1cc3cd9b90e1f2f814223e83c0437507be924b" gracePeriod=30 Jan 27 13:36:03 crc kubenswrapper[4786]: I0127 13:36:03.748279 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-2"] Jan 27 13:36:03 crc kubenswrapper[4786]: I0127 13:36:03.748628 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-2" podUID="df824f05-4250-4af1-a208-e4d1233d297f" containerName="nova-kuttl-metadata-log" containerID="cri-o://e0f5521164e86e2fa4cdadfa7fdf83365fde0915db476192c8e042c10d76b01c" gracePeriod=30 Jan 27 13:36:03 crc kubenswrapper[4786]: I0127 13:36:03.748699 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-2" podUID="df824f05-4250-4af1-a208-e4d1233d297f" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://63ca42c750fdbba36aa46d43b9d5a61bfcfd53c045751428e0dbb40e680d9498" gracePeriod=30 Jan 27 13:36:03 crc kubenswrapper[4786]: I0127 13:36:03.784038 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 27 13:36:03 crc kubenswrapper[4786]: I0127 13:36:03.785380 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 27 13:36:03 crc kubenswrapper[4786]: I0127 13:36:03.786421 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" event={"ID":"62afa32f-2cf3-4581-8db3-0a4dd0d70545","Type":"ContainerDied","Data":"35462ad291ad7c3fd6c94a3ae419240e1a385580703840571630beca19992bf4"} Jan 27 13:36:03 crc kubenswrapper[4786]: I0127 13:36:03.786486 4786 scope.go:117] "RemoveContainer" containerID="9bcb9c71905a133eee1dff949e34ca22440258f1eab51d520f8aeceff6f562dd" Jan 27 13:36:03 crc kubenswrapper[4786]: I0127 13:36:03.787601 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-1"] Jan 27 13:36:03 crc kubenswrapper[4786]: I0127 13:36:03.798847 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-1" podUID="898828a7-a3cd-4455-b719-15d886f937a4" containerName="nova-kuttl-metadata-log" containerID="cri-o://240fd09a7d3dfde135c79d9f3995fc5fc4eca40aed0eed97d4a66c44eef937fc" gracePeriod=30 Jan 27 13:36:03 crc kubenswrapper[4786]: I0127 13:36:03.799100 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-1" podUID="898828a7-a3cd-4455-b719-15d886f937a4" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://a07a605c50a482deb690ae43ecf1fd00bda5615e0bac9020ad7bc5ae5b8ab4b4" gracePeriod=30 Jan 27 13:36:03 crc kubenswrapper[4786]: I0127 13:36:03.847666 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-1"] Jan 27 13:36:03 crc kubenswrapper[4786]: I0127 13:36:03.857841 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-1"] Jan 27 13:36:03 crc kubenswrapper[4786]: I0127 13:36:03.891523 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-2"] Jan 27 13:36:03 crc kubenswrapper[4786]: I0127 13:36:03.896579 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-2"] Jan 27 13:36:04 crc kubenswrapper[4786]: I0127 13:36:04.014085 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-2"] Jan 27 13:36:04 crc kubenswrapper[4786]: I0127 13:36:04.014368 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" podUID="602c933e-40d5-42c8-8319-960566e74b61" containerName="nova-kuttl-cell1-conductor-conductor" containerID="cri-o://863a49218bc0276520fd165e13d85605ed46d0e8629f4c1777483ebe0c6fc71b" gracePeriod=30 Jan 27 13:36:04 crc kubenswrapper[4786]: I0127 13:36:04.023208 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-1"] Jan 27 13:36:04 crc kubenswrapper[4786]: I0127 13:36:04.023417 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" podUID="5c584017-f51e-4001-b5b6-dacefc2d7658" containerName="nova-kuttl-cell1-conductor-conductor" containerID="cri-o://e79fa17f4f1d237ed33d0bb3a19d70bcdfacfbd92d16a884a04a025329b17e3b" gracePeriod=30 Jan 27 13:36:04 crc kubenswrapper[4786]: I0127 13:36:04.793880 4786 generic.go:334] "Generic (PLEG): container finished" podID="898828a7-a3cd-4455-b719-15d886f937a4" containerID="240fd09a7d3dfde135c79d9f3995fc5fc4eca40aed0eed97d4a66c44eef937fc" exitCode=143 Jan 27 13:36:04 crc kubenswrapper[4786]: I0127 13:36:04.793969 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-1" event={"ID":"898828a7-a3cd-4455-b719-15d886f937a4","Type":"ContainerDied","Data":"240fd09a7d3dfde135c79d9f3995fc5fc4eca40aed0eed97d4a66c44eef937fc"} Jan 27 13:36:04 crc kubenswrapper[4786]: I0127 13:36:04.797408 
4786 generic.go:334] "Generic (PLEG): container finished" podID="df824f05-4250-4af1-a208-e4d1233d297f" containerID="e0f5521164e86e2fa4cdadfa7fdf83365fde0915db476192c8e042c10d76b01c" exitCode=143 Jan 27 13:36:04 crc kubenswrapper[4786]: I0127 13:36:04.797453 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-2" event={"ID":"df824f05-4250-4af1-a208-e4d1233d297f","Type":"ContainerDied","Data":"e0f5521164e86e2fa4cdadfa7fdf83365fde0915db476192c8e042c10d76b01c"} Jan 27 13:36:04 crc kubenswrapper[4786]: E0127 13:36:04.817181 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11168003ef9fa6174893781aaa1cc3cd9b90e1f2f814223e83c0437507be924b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 13:36:04 crc kubenswrapper[4786]: E0127 13:36:04.818649 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11168003ef9fa6174893781aaa1cc3cd9b90e1f2f814223e83c0437507be924b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 13:36:04 crc kubenswrapper[4786]: E0127 13:36:04.819804 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11168003ef9fa6174893781aaa1cc3cd9b90e1f2f814223e83c0437507be924b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 13:36:04 crc kubenswrapper[4786]: E0127 13:36:04.819834 4786 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-scheduler-1" 
podUID="c946efaf-f2a6-48b6-a52c-3fd537fc15f4" containerName="nova-kuttl-scheduler-scheduler" Jan 27 13:36:04 crc kubenswrapper[4786]: E0127 13:36:04.832568 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7250f0b735f220490a3146ab284737aeb35b0cac638909eb25e670eed4da724c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 13:36:04 crc kubenswrapper[4786]: E0127 13:36:04.833769 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7250f0b735f220490a3146ab284737aeb35b0cac638909eb25e670eed4da724c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 13:36:04 crc kubenswrapper[4786]: E0127 13:36:04.834923 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7250f0b735f220490a3146ab284737aeb35b0cac638909eb25e670eed4da724c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 13:36:04 crc kubenswrapper[4786]: E0127 13:36:04.834946 4786 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-scheduler-2" podUID="e2f7b9c2-401b-456a-93d3-cb99227f3a21" containerName="nova-kuttl-scheduler-scheduler" Jan 27 13:36:05 crc kubenswrapper[4786]: I0127 13:36:05.477523 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1247ecc9-9177-446b-b939-3d158d4d0cd0" path="/var/lib/kubelet/pods/1247ecc9-9177-446b-b939-3d158d4d0cd0/volumes" Jan 27 13:36:05 crc kubenswrapper[4786]: I0127 13:36:05.478192 4786 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62afa32f-2cf3-4581-8db3-0a4dd0d70545" path="/var/lib/kubelet/pods/62afa32f-2cf3-4581-8db3-0a4dd0d70545/volumes" Jan 27 13:36:05 crc kubenswrapper[4786]: E0127 13:36:05.921524 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="863a49218bc0276520fd165e13d85605ed46d0e8629f4c1777483ebe0c6fc71b" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 13:36:05 crc kubenswrapper[4786]: E0127 13:36:05.924065 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="863a49218bc0276520fd165e13d85605ed46d0e8629f4c1777483ebe0c6fc71b" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 13:36:05 crc kubenswrapper[4786]: E0127 13:36:05.925430 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="863a49218bc0276520fd165e13d85605ed46d0e8629f4c1777483ebe0c6fc71b" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 13:36:05 crc kubenswrapper[4786]: E0127 13:36:05.925467 4786 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" podUID="602c933e-40d5-42c8-8319-960566e74b61" containerName="nova-kuttl-cell1-conductor-conductor" Jan 27 13:36:05 crc kubenswrapper[4786]: E0127 13:36:05.932095 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, 
stdout: , stderr: , exit code -1" containerID="e79fa17f4f1d237ed33d0bb3a19d70bcdfacfbd92d16a884a04a025329b17e3b" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 13:36:05 crc kubenswrapper[4786]: E0127 13:36:05.933559 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e79fa17f4f1d237ed33d0bb3a19d70bcdfacfbd92d16a884a04a025329b17e3b" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 13:36:05 crc kubenswrapper[4786]: E0127 13:36:05.935168 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e79fa17f4f1d237ed33d0bb3a19d70bcdfacfbd92d16a884a04a025329b17e3b" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 13:36:05 crc kubenswrapper[4786]: E0127 13:36:05.935233 4786 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" podUID="5c584017-f51e-4001-b5b6-dacefc2d7658" containerName="nova-kuttl-cell1-conductor-conductor" Jan 27 13:36:06 crc kubenswrapper[4786]: I0127 13:36:06.942138 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-1" podUID="898828a7-a3cd-4455-b719-15d886f937a4" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.175:8775/\": read tcp 10.217.0.2:45710->10.217.0.175:8775: read: connection reset by peer" Jan 27 13:36:06 crc kubenswrapper[4786]: I0127 13:36:06.942494 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-1" podUID="898828a7-a3cd-4455-b719-15d886f937a4" 
containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.175:8775/\": read tcp 10.217.0.2:45696->10.217.0.175:8775: read: connection reset by peer" Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.018376 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-2" podUID="df824f05-4250-4af1-a208-e4d1233d297f" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.176:8775/\": read tcp 10.217.0.2:57772->10.217.0.176:8775: read: connection reset by peer" Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.018426 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-2" podUID="df824f05-4250-4af1-a208-e4d1233d297f" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.176:8775/\": read tcp 10.217.0.2:57776->10.217.0.176:8775: read: connection reset by peer" Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.473368 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.481396 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.620741 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df824f05-4250-4af1-a208-e4d1233d297f-config-data\") pod \"df824f05-4250-4af1-a208-e4d1233d297f\" (UID: \"df824f05-4250-4af1-a208-e4d1233d297f\") " Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.620931 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/898828a7-a3cd-4455-b719-15d886f937a4-config-data\") pod \"898828a7-a3cd-4455-b719-15d886f937a4\" (UID: \"898828a7-a3cd-4455-b719-15d886f937a4\") " Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.620971 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtdsj\" (UniqueName: \"kubernetes.io/projected/df824f05-4250-4af1-a208-e4d1233d297f-kube-api-access-jtdsj\") pod \"df824f05-4250-4af1-a208-e4d1233d297f\" (UID: \"df824f05-4250-4af1-a208-e4d1233d297f\") " Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.621001 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwzmj\" (UniqueName: \"kubernetes.io/projected/898828a7-a3cd-4455-b719-15d886f937a4-kube-api-access-fwzmj\") pod \"898828a7-a3cd-4455-b719-15d886f937a4\" (UID: \"898828a7-a3cd-4455-b719-15d886f937a4\") " Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.621046 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/898828a7-a3cd-4455-b719-15d886f937a4-logs\") pod \"898828a7-a3cd-4455-b719-15d886f937a4\" (UID: \"898828a7-a3cd-4455-b719-15d886f937a4\") " Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.621105 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/df824f05-4250-4af1-a208-e4d1233d297f-logs\") pod \"df824f05-4250-4af1-a208-e4d1233d297f\" (UID: \"df824f05-4250-4af1-a208-e4d1233d297f\") " Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.621690 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df824f05-4250-4af1-a208-e4d1233d297f-logs" (OuterVolumeSpecName: "logs") pod "df824f05-4250-4af1-a208-e4d1233d297f" (UID: "df824f05-4250-4af1-a208-e4d1233d297f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.621708 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/898828a7-a3cd-4455-b719-15d886f937a4-logs" (OuterVolumeSpecName: "logs") pod "898828a7-a3cd-4455-b719-15d886f937a4" (UID: "898828a7-a3cd-4455-b719-15d886f937a4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.626343 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df824f05-4250-4af1-a208-e4d1233d297f-kube-api-access-jtdsj" (OuterVolumeSpecName: "kube-api-access-jtdsj") pod "df824f05-4250-4af1-a208-e4d1233d297f" (UID: "df824f05-4250-4af1-a208-e4d1233d297f"). InnerVolumeSpecName "kube-api-access-jtdsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.627361 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/898828a7-a3cd-4455-b719-15d886f937a4-kube-api-access-fwzmj" (OuterVolumeSpecName: "kube-api-access-fwzmj") pod "898828a7-a3cd-4455-b719-15d886f937a4" (UID: "898828a7-a3cd-4455-b719-15d886f937a4"). InnerVolumeSpecName "kube-api-access-fwzmj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.657633 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/898828a7-a3cd-4455-b719-15d886f937a4-config-data" (OuterVolumeSpecName: "config-data") pod "898828a7-a3cd-4455-b719-15d886f937a4" (UID: "898828a7-a3cd-4455-b719-15d886f937a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.664151 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df824f05-4250-4af1-a208-e4d1233d297f-config-data" (OuterVolumeSpecName: "config-data") pod "df824f05-4250-4af1-a208-e4d1233d297f" (UID: "df824f05-4250-4af1-a208-e4d1233d297f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.723173 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/898828a7-a3cd-4455-b719-15d886f937a4-logs\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.723208 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df824f05-4250-4af1-a208-e4d1233d297f-logs\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.723218 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df824f05-4250-4af1-a208-e4d1233d297f-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.723226 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/898828a7-a3cd-4455-b719-15d886f937a4-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.723236 4786 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtdsj\" (UniqueName: \"kubernetes.io/projected/df824f05-4250-4af1-a208-e4d1233d297f-kube-api-access-jtdsj\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.723247 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwzmj\" (UniqueName: \"kubernetes.io/projected/898828a7-a3cd-4455-b719-15d886f937a4-kube-api-access-fwzmj\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.836571 4786 generic.go:334] "Generic (PLEG): container finished" podID="e2f7b9c2-401b-456a-93d3-cb99227f3a21" containerID="7250f0b735f220490a3146ab284737aeb35b0cac638909eb25e670eed4da724c" exitCode=0 Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.836657 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-2" event={"ID":"e2f7b9c2-401b-456a-93d3-cb99227f3a21","Type":"ContainerDied","Data":"7250f0b735f220490a3146ab284737aeb35b0cac638909eb25e670eed4da724c"} Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.838828 4786 generic.go:334] "Generic (PLEG): container finished" podID="898828a7-a3cd-4455-b719-15d886f937a4" containerID="a07a605c50a482deb690ae43ecf1fd00bda5615e0bac9020ad7bc5ae5b8ab4b4" exitCode=0 Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.838932 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.838937 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-1" event={"ID":"898828a7-a3cd-4455-b719-15d886f937a4","Type":"ContainerDied","Data":"a07a605c50a482deb690ae43ecf1fd00bda5615e0bac9020ad7bc5ae5b8ab4b4"} Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.838984 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-1" event={"ID":"898828a7-a3cd-4455-b719-15d886f937a4","Type":"ContainerDied","Data":"490eb44dbf02a444c8a4a8b67452283752da4c9765e63b719e64f172d8945134"} Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.839003 4786 scope.go:117] "RemoveContainer" containerID="a07a605c50a482deb690ae43ecf1fd00bda5615e0bac9020ad7bc5ae5b8ab4b4" Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.841389 4786 generic.go:334] "Generic (PLEG): container finished" podID="c946efaf-f2a6-48b6-a52c-3fd537fc15f4" containerID="11168003ef9fa6174893781aaa1cc3cd9b90e1f2f814223e83c0437507be924b" exitCode=0 Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.841466 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-1" event={"ID":"c946efaf-f2a6-48b6-a52c-3fd537fc15f4","Type":"ContainerDied","Data":"11168003ef9fa6174893781aaa1cc3cd9b90e1f2f814223e83c0437507be924b"} Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.844483 4786 generic.go:334] "Generic (PLEG): container finished" podID="df824f05-4250-4af1-a208-e4d1233d297f" containerID="63ca42c750fdbba36aa46d43b9d5a61bfcfd53c045751428e0dbb40e680d9498" exitCode=0 Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.844525 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-2" 
event={"ID":"df824f05-4250-4af1-a208-e4d1233d297f","Type":"ContainerDied","Data":"63ca42c750fdbba36aa46d43b9d5a61bfcfd53c045751428e0dbb40e680d9498"} Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.844553 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-2" event={"ID":"df824f05-4250-4af1-a208-e4d1233d297f","Type":"ContainerDied","Data":"466371f5a457b5f68ddb23ac2988b0349d617a749f80cb4bb288f04ba80ab37d"} Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.844639 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.886527 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-1"] Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.897032 4786 scope.go:117] "RemoveContainer" containerID="240fd09a7d3dfde135c79d9f3995fc5fc4eca40aed0eed97d4a66c44eef937fc" Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.900838 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-1"] Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.931413 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-2"] Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.937571 4786 scope.go:117] "RemoveContainer" containerID="a07a605c50a482deb690ae43ecf1fd00bda5615e0bac9020ad7bc5ae5b8ab4b4" Jan 27 13:36:07 crc kubenswrapper[4786]: E0127 13:36:07.938161 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a07a605c50a482deb690ae43ecf1fd00bda5615e0bac9020ad7bc5ae5b8ab4b4\": container with ID starting with a07a605c50a482deb690ae43ecf1fd00bda5615e0bac9020ad7bc5ae5b8ab4b4 not found: ID does not exist" containerID="a07a605c50a482deb690ae43ecf1fd00bda5615e0bac9020ad7bc5ae5b8ab4b4" Jan 27 13:36:07 crc 
kubenswrapper[4786]: I0127 13:36:07.938208 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a07a605c50a482deb690ae43ecf1fd00bda5615e0bac9020ad7bc5ae5b8ab4b4"} err="failed to get container status \"a07a605c50a482deb690ae43ecf1fd00bda5615e0bac9020ad7bc5ae5b8ab4b4\": rpc error: code = NotFound desc = could not find container \"a07a605c50a482deb690ae43ecf1fd00bda5615e0bac9020ad7bc5ae5b8ab4b4\": container with ID starting with a07a605c50a482deb690ae43ecf1fd00bda5615e0bac9020ad7bc5ae5b8ab4b4 not found: ID does not exist" Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.938234 4786 scope.go:117] "RemoveContainer" containerID="240fd09a7d3dfde135c79d9f3995fc5fc4eca40aed0eed97d4a66c44eef937fc" Jan 27 13:36:07 crc kubenswrapper[4786]: E0127 13:36:07.938557 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"240fd09a7d3dfde135c79d9f3995fc5fc4eca40aed0eed97d4a66c44eef937fc\": container with ID starting with 240fd09a7d3dfde135c79d9f3995fc5fc4eca40aed0eed97d4a66c44eef937fc not found: ID does not exist" containerID="240fd09a7d3dfde135c79d9f3995fc5fc4eca40aed0eed97d4a66c44eef937fc" Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.938579 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"240fd09a7d3dfde135c79d9f3995fc5fc4eca40aed0eed97d4a66c44eef937fc"} err="failed to get container status \"240fd09a7d3dfde135c79d9f3995fc5fc4eca40aed0eed97d4a66c44eef937fc\": rpc error: code = NotFound desc = could not find container \"240fd09a7d3dfde135c79d9f3995fc5fc4eca40aed0eed97d4a66c44eef937fc\": container with ID starting with 240fd09a7d3dfde135c79d9f3995fc5fc4eca40aed0eed97d4a66c44eef937fc not found: ID does not exist" Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.938590 4786 scope.go:117] "RemoveContainer" containerID="63ca42c750fdbba36aa46d43b9d5a61bfcfd53c045751428e0dbb40e680d9498" Jan 27 
13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.939261 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-2"] Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.959044 4786 scope.go:117] "RemoveContainer" containerID="e0f5521164e86e2fa4cdadfa7fdf83365fde0915db476192c8e042c10d76b01c" Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.992796 4786 scope.go:117] "RemoveContainer" containerID="63ca42c750fdbba36aa46d43b9d5a61bfcfd53c045751428e0dbb40e680d9498" Jan 27 13:36:07 crc kubenswrapper[4786]: E0127 13:36:07.993247 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63ca42c750fdbba36aa46d43b9d5a61bfcfd53c045751428e0dbb40e680d9498\": container with ID starting with 63ca42c750fdbba36aa46d43b9d5a61bfcfd53c045751428e0dbb40e680d9498 not found: ID does not exist" containerID="63ca42c750fdbba36aa46d43b9d5a61bfcfd53c045751428e0dbb40e680d9498" Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.993276 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63ca42c750fdbba36aa46d43b9d5a61bfcfd53c045751428e0dbb40e680d9498"} err="failed to get container status \"63ca42c750fdbba36aa46d43b9d5a61bfcfd53c045751428e0dbb40e680d9498\": rpc error: code = NotFound desc = could not find container \"63ca42c750fdbba36aa46d43b9d5a61bfcfd53c045751428e0dbb40e680d9498\": container with ID starting with 63ca42c750fdbba36aa46d43b9d5a61bfcfd53c045751428e0dbb40e680d9498 not found: ID does not exist" Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.993305 4786 scope.go:117] "RemoveContainer" containerID="e0f5521164e86e2fa4cdadfa7fdf83365fde0915db476192c8e042c10d76b01c" Jan 27 13:36:07 crc kubenswrapper[4786]: E0127 13:36:07.993675 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e0f5521164e86e2fa4cdadfa7fdf83365fde0915db476192c8e042c10d76b01c\": container with ID starting with e0f5521164e86e2fa4cdadfa7fdf83365fde0915db476192c8e042c10d76b01c not found: ID does not exist" containerID="e0f5521164e86e2fa4cdadfa7fdf83365fde0915db476192c8e042c10d76b01c" Jan 27 13:36:07 crc kubenswrapper[4786]: I0127 13:36:07.993693 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0f5521164e86e2fa4cdadfa7fdf83365fde0915db476192c8e042c10d76b01c"} err="failed to get container status \"e0f5521164e86e2fa4cdadfa7fdf83365fde0915db476192c8e042c10d76b01c\": rpc error: code = NotFound desc = could not find container \"e0f5521164e86e2fa4cdadfa7fdf83365fde0915db476192c8e042c10d76b01c\": container with ID starting with e0f5521164e86e2fa4cdadfa7fdf83365fde0915db476192c8e042c10d76b01c not found: ID does not exist" Jan 27 13:36:08 crc kubenswrapper[4786]: I0127 13:36:08.027524 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 27 13:36:08 crc kubenswrapper[4786]: I0127 13:36:08.136277 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2f7b9c2-401b-456a-93d3-cb99227f3a21-config-data\") pod \"e2f7b9c2-401b-456a-93d3-cb99227f3a21\" (UID: \"e2f7b9c2-401b-456a-93d3-cb99227f3a21\") " Jan 27 13:36:08 crc kubenswrapper[4786]: I0127 13:36:08.136370 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-544kh\" (UniqueName: \"kubernetes.io/projected/e2f7b9c2-401b-456a-93d3-cb99227f3a21-kube-api-access-544kh\") pod \"e2f7b9c2-401b-456a-93d3-cb99227f3a21\" (UID: \"e2f7b9c2-401b-456a-93d3-cb99227f3a21\") " Jan 27 13:36:08 crc kubenswrapper[4786]: I0127 13:36:08.138281 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 27 13:36:08 crc kubenswrapper[4786]: I0127 13:36:08.147899 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2f7b9c2-401b-456a-93d3-cb99227f3a21-kube-api-access-544kh" (OuterVolumeSpecName: "kube-api-access-544kh") pod "e2f7b9c2-401b-456a-93d3-cb99227f3a21" (UID: "e2f7b9c2-401b-456a-93d3-cb99227f3a21"). InnerVolumeSpecName "kube-api-access-544kh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:36:08 crc kubenswrapper[4786]: I0127 13:36:08.160930 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2f7b9c2-401b-456a-93d3-cb99227f3a21-config-data" (OuterVolumeSpecName: "config-data") pod "e2f7b9c2-401b-456a-93d3-cb99227f3a21" (UID: "e2f7b9c2-401b-456a-93d3-cb99227f3a21"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:36:08 crc kubenswrapper[4786]: I0127 13:36:08.238158 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh7sh\" (UniqueName: \"kubernetes.io/projected/c946efaf-f2a6-48b6-a52c-3fd537fc15f4-kube-api-access-wh7sh\") pod \"c946efaf-f2a6-48b6-a52c-3fd537fc15f4\" (UID: \"c946efaf-f2a6-48b6-a52c-3fd537fc15f4\") " Jan 27 13:36:08 crc kubenswrapper[4786]: I0127 13:36:08.238371 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c946efaf-f2a6-48b6-a52c-3fd537fc15f4-config-data\") pod \"c946efaf-f2a6-48b6-a52c-3fd537fc15f4\" (UID: \"c946efaf-f2a6-48b6-a52c-3fd537fc15f4\") " Jan 27 13:36:08 crc kubenswrapper[4786]: I0127 13:36:08.238788 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2f7b9c2-401b-456a-93d3-cb99227f3a21-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:08 crc kubenswrapper[4786]: I0127 13:36:08.238812 
4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-544kh\" (UniqueName: \"kubernetes.io/projected/e2f7b9c2-401b-456a-93d3-cb99227f3a21-kube-api-access-544kh\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:08 crc kubenswrapper[4786]: I0127 13:36:08.245075 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c946efaf-f2a6-48b6-a52c-3fd537fc15f4-kube-api-access-wh7sh" (OuterVolumeSpecName: "kube-api-access-wh7sh") pod "c946efaf-f2a6-48b6-a52c-3fd537fc15f4" (UID: "c946efaf-f2a6-48b6-a52c-3fd537fc15f4"). InnerVolumeSpecName "kube-api-access-wh7sh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:36:08 crc kubenswrapper[4786]: I0127 13:36:08.255763 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c946efaf-f2a6-48b6-a52c-3fd537fc15f4-config-data" (OuterVolumeSpecName: "config-data") pod "c946efaf-f2a6-48b6-a52c-3fd537fc15f4" (UID: "c946efaf-f2a6-48b6-a52c-3fd537fc15f4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:36:08 crc kubenswrapper[4786]: I0127 13:36:08.340498 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c946efaf-f2a6-48b6-a52c-3fd537fc15f4-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:08 crc kubenswrapper[4786]: I0127 13:36:08.340547 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wh7sh\" (UniqueName: \"kubernetes.io/projected/c946efaf-f2a6-48b6-a52c-3fd537fc15f4-kube-api-access-wh7sh\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:08 crc kubenswrapper[4786]: I0127 13:36:08.877883 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-1" event={"ID":"c946efaf-f2a6-48b6-a52c-3fd537fc15f4","Type":"ContainerDied","Data":"68ee64ade851626fd376fe1fe22cc14542272ff2420bc667acfb29253ee61adb"} Jan 27 13:36:08 crc kubenswrapper[4786]: I0127 13:36:08.878202 4786 scope.go:117] "RemoveContainer" containerID="11168003ef9fa6174893781aaa1cc3cd9b90e1f2f814223e83c0437507be924b" Jan 27 13:36:08 crc kubenswrapper[4786]: I0127 13:36:08.878306 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 27 13:36:08 crc kubenswrapper[4786]: I0127 13:36:08.931530 4786 generic.go:334] "Generic (PLEG): container finished" podID="5c584017-f51e-4001-b5b6-dacefc2d7658" containerID="e79fa17f4f1d237ed33d0bb3a19d70bcdfacfbd92d16a884a04a025329b17e3b" exitCode=0 Jan 27 13:36:08 crc kubenswrapper[4786]: I0127 13:36:08.931656 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" event={"ID":"5c584017-f51e-4001-b5b6-dacefc2d7658","Type":"ContainerDied","Data":"e79fa17f4f1d237ed33d0bb3a19d70bcdfacfbd92d16a884a04a025329b17e3b"} Jan 27 13:36:08 crc kubenswrapper[4786]: I0127 13:36:08.942567 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-2" event={"ID":"e2f7b9c2-401b-456a-93d3-cb99227f3a21","Type":"ContainerDied","Data":"d218410eac23febe72ef7c54b42f65b629adbf8655a50c20c02656b521660329"} Jan 27 13:36:08 crc kubenswrapper[4786]: I0127 13:36:08.942806 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 27 13:36:08 crc kubenswrapper[4786]: I0127 13:36:08.974332 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-1"] Jan 27 13:36:08 crc kubenswrapper[4786]: I0127 13:36:08.977303 4786 scope.go:117] "RemoveContainer" containerID="7250f0b735f220490a3146ab284737aeb35b0cac638909eb25e670eed4da724c" Jan 27 13:36:08 crc kubenswrapper[4786]: I0127 13:36:08.997666 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-1"] Jan 27 13:36:09 crc kubenswrapper[4786]: I0127 13:36:09.006374 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-2"] Jan 27 13:36:09 crc kubenswrapper[4786]: I0127 13:36:09.012698 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-2"] Jan 27 13:36:09 crc kubenswrapper[4786]: I0127 13:36:09.210444 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 27 13:36:09 crc kubenswrapper[4786]: I0127 13:36:09.253414 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h82mj\" (UniqueName: \"kubernetes.io/projected/5c584017-f51e-4001-b5b6-dacefc2d7658-kube-api-access-h82mj\") pod \"5c584017-f51e-4001-b5b6-dacefc2d7658\" (UID: \"5c584017-f51e-4001-b5b6-dacefc2d7658\") " Jan 27 13:36:09 crc kubenswrapper[4786]: I0127 13:36:09.253542 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c584017-f51e-4001-b5b6-dacefc2d7658-config-data\") pod \"5c584017-f51e-4001-b5b6-dacefc2d7658\" (UID: \"5c584017-f51e-4001-b5b6-dacefc2d7658\") " Jan 27 13:36:09 crc kubenswrapper[4786]: I0127 13:36:09.258881 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c584017-f51e-4001-b5b6-dacefc2d7658-kube-api-access-h82mj" (OuterVolumeSpecName: "kube-api-access-h82mj") pod "5c584017-f51e-4001-b5b6-dacefc2d7658" (UID: "5c584017-f51e-4001-b5b6-dacefc2d7658"). InnerVolumeSpecName "kube-api-access-h82mj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:36:09 crc kubenswrapper[4786]: I0127 13:36:09.279794 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c584017-f51e-4001-b5b6-dacefc2d7658-config-data" (OuterVolumeSpecName: "config-data") pod "5c584017-f51e-4001-b5b6-dacefc2d7658" (UID: "5c584017-f51e-4001-b5b6-dacefc2d7658"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:36:09 crc kubenswrapper[4786]: I0127 13:36:09.355480 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c584017-f51e-4001-b5b6-dacefc2d7658-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:09 crc kubenswrapper[4786]: I0127 13:36:09.355515 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h82mj\" (UniqueName: \"kubernetes.io/projected/5c584017-f51e-4001-b5b6-dacefc2d7658-kube-api-access-h82mj\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:09 crc kubenswrapper[4786]: I0127 13:36:09.477161 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="898828a7-a3cd-4455-b719-15d886f937a4" path="/var/lib/kubelet/pods/898828a7-a3cd-4455-b719-15d886f937a4/volumes" Jan 27 13:36:09 crc kubenswrapper[4786]: I0127 13:36:09.477849 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c946efaf-f2a6-48b6-a52c-3fd537fc15f4" path="/var/lib/kubelet/pods/c946efaf-f2a6-48b6-a52c-3fd537fc15f4/volumes" Jan 27 13:36:09 crc kubenswrapper[4786]: I0127 13:36:09.478581 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df824f05-4250-4af1-a208-e4d1233d297f" path="/var/lib/kubelet/pods/df824f05-4250-4af1-a208-e4d1233d297f/volumes" Jan 27 13:36:09 crc kubenswrapper[4786]: I0127 13:36:09.480965 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2f7b9c2-401b-456a-93d3-cb99227f3a21" path="/var/lib/kubelet/pods/e2f7b9c2-401b-456a-93d3-cb99227f3a21/volumes" Jan 27 13:36:09 crc kubenswrapper[4786]: I0127 13:36:09.609824 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 27 13:36:09 crc kubenswrapper[4786]: I0127 13:36:09.680154 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f28z8\" (UniqueName: \"kubernetes.io/projected/602c933e-40d5-42c8-8319-960566e74b61-kube-api-access-f28z8\") pod \"602c933e-40d5-42c8-8319-960566e74b61\" (UID: \"602c933e-40d5-42c8-8319-960566e74b61\") " Jan 27 13:36:09 crc kubenswrapper[4786]: I0127 13:36:09.680460 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/602c933e-40d5-42c8-8319-960566e74b61-config-data\") pod \"602c933e-40d5-42c8-8319-960566e74b61\" (UID: \"602c933e-40d5-42c8-8319-960566e74b61\") " Jan 27 13:36:09 crc kubenswrapper[4786]: I0127 13:36:09.685194 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/602c933e-40d5-42c8-8319-960566e74b61-kube-api-access-f28z8" (OuterVolumeSpecName: "kube-api-access-f28z8") pod "602c933e-40d5-42c8-8319-960566e74b61" (UID: "602c933e-40d5-42c8-8319-960566e74b61"). InnerVolumeSpecName "kube-api-access-f28z8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:36:09 crc kubenswrapper[4786]: I0127 13:36:09.706173 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/602c933e-40d5-42c8-8319-960566e74b61-config-data" (OuterVolumeSpecName: "config-data") pod "602c933e-40d5-42c8-8319-960566e74b61" (UID: "602c933e-40d5-42c8-8319-960566e74b61"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:36:09 crc kubenswrapper[4786]: I0127 13:36:09.782930 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f28z8\" (UniqueName: \"kubernetes.io/projected/602c933e-40d5-42c8-8319-960566e74b61-kube-api-access-f28z8\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:09 crc kubenswrapper[4786]: I0127 13:36:09.782988 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/602c933e-40d5-42c8-8319-960566e74b61-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:09 crc kubenswrapper[4786]: I0127 13:36:09.953418 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" event={"ID":"5c584017-f51e-4001-b5b6-dacefc2d7658","Type":"ContainerDied","Data":"3b3bf9fb85f3c4b77081efc160a801ab41889ca3d9b8bf8ea5998ccfaa7e5a4c"} Jan 27 13:36:09 crc kubenswrapper[4786]: I0127 13:36:09.953481 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 27 13:36:09 crc kubenswrapper[4786]: I0127 13:36:09.953489 4786 scope.go:117] "RemoveContainer" containerID="e79fa17f4f1d237ed33d0bb3a19d70bcdfacfbd92d16a884a04a025329b17e3b" Jan 27 13:36:09 crc kubenswrapper[4786]: I0127 13:36:09.958772 4786 generic.go:334] "Generic (PLEG): container finished" podID="602c933e-40d5-42c8-8319-960566e74b61" containerID="863a49218bc0276520fd165e13d85605ed46d0e8629f4c1777483ebe0c6fc71b" exitCode=0 Jan 27 13:36:09 crc kubenswrapper[4786]: I0127 13:36:09.958852 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 27 13:36:09 crc kubenswrapper[4786]: I0127 13:36:09.959299 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" event={"ID":"602c933e-40d5-42c8-8319-960566e74b61","Type":"ContainerDied","Data":"863a49218bc0276520fd165e13d85605ed46d0e8629f4c1777483ebe0c6fc71b"} Jan 27 13:36:09 crc kubenswrapper[4786]: I0127 13:36:09.959357 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" event={"ID":"602c933e-40d5-42c8-8319-960566e74b61","Type":"ContainerDied","Data":"015ebf81fe6b06c23a909f95f6f6a7364b70de86eaff437fe3449e53b6b4ef33"} Jan 27 13:36:09 crc kubenswrapper[4786]: I0127 13:36:09.980160 4786 scope.go:117] "RemoveContainer" containerID="863a49218bc0276520fd165e13d85605ed46d0e8629f4c1777483ebe0c6fc71b" Jan 27 13:36:10 crc kubenswrapper[4786]: I0127 13:36:10.000267 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-1"] Jan 27 13:36:10 crc kubenswrapper[4786]: I0127 13:36:10.010167 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-1"] Jan 27 13:36:10 crc kubenswrapper[4786]: I0127 13:36:10.017825 4786 scope.go:117] "RemoveContainer" containerID="863a49218bc0276520fd165e13d85605ed46d0e8629f4c1777483ebe0c6fc71b" Jan 27 13:36:10 crc kubenswrapper[4786]: E0127 13:36:10.018228 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"863a49218bc0276520fd165e13d85605ed46d0e8629f4c1777483ebe0c6fc71b\": container with ID starting with 863a49218bc0276520fd165e13d85605ed46d0e8629f4c1777483ebe0c6fc71b not found: ID does not exist" containerID="863a49218bc0276520fd165e13d85605ed46d0e8629f4c1777483ebe0c6fc71b" Jan 27 13:36:10 crc kubenswrapper[4786]: I0127 13:36:10.018268 4786 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"863a49218bc0276520fd165e13d85605ed46d0e8629f4c1777483ebe0c6fc71b"} err="failed to get container status \"863a49218bc0276520fd165e13d85605ed46d0e8629f4c1777483ebe0c6fc71b\": rpc error: code = NotFound desc = could not find container \"863a49218bc0276520fd165e13d85605ed46d0e8629f4c1777483ebe0c6fc71b\": container with ID starting with 863a49218bc0276520fd165e13d85605ed46d0e8629f4c1777483ebe0c6fc71b not found: ID does not exist" Jan 27 13:36:10 crc kubenswrapper[4786]: I0127 13:36:10.021338 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-2"] Jan 27 13:36:10 crc kubenswrapper[4786]: I0127 13:36:10.027688 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-2"] Jan 27 13:36:11 crc kubenswrapper[4786]: I0127 13:36:11.474507 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c584017-f51e-4001-b5b6-dacefc2d7658" path="/var/lib/kubelet/pods/5c584017-f51e-4001-b5b6-dacefc2d7658/volumes" Jan 27 13:36:11 crc kubenswrapper[4786]: I0127 13:36:11.475247 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="602c933e-40d5-42c8-8319-960566e74b61" path="/var/lib/kubelet/pods/602c933e-40d5-42c8-8319-960566e74b61/volumes" Jan 27 13:36:17 crc kubenswrapper[4786]: I0127 13:36:17.471359 4786 scope.go:117] "RemoveContainer" containerID="2f94fef961bba31453718ee505c65aba4d5e2d46c581dccdf2daaebbe3a39906" Jan 27 13:36:17 crc kubenswrapper[4786]: E0127 13:36:17.472356 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" 
podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:36:20 crc kubenswrapper[4786]: I0127 13:36:20.665574 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:36:20 crc kubenswrapper[4786]: I0127 13:36:20.666391 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="51c7d69a-7abb-4abe-8954-874bcd3c8f33" containerName="nova-kuttl-api-log" containerID="cri-o://21fabf85db70b1388eab53d6a7c98be1c30e2e42b8fa9b0027542aba48182fbd" gracePeriod=30 Jan 27 13:36:20 crc kubenswrapper[4786]: I0127 13:36:20.666511 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="51c7d69a-7abb-4abe-8954-874bcd3c8f33" containerName="nova-kuttl-api-api" containerID="cri-o://7becddc47e821d4547b2bcaf14514e7a9b3cf9c84f09e3a37b5ebe5b5106e95b" gracePeriod=30 Jan 27 13:36:20 crc kubenswrapper[4786]: I0127 13:36:20.975736 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 27 13:36:20 crc kubenswrapper[4786]: I0127 13:36:20.975985 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podUID="210ccb3b-41d2-4166-8326-0d44f933bd23" containerName="nova-kuttl-cell0-conductor-conductor" containerID="cri-o://e24809012c4038884894028afc330aa68ac0ab179f0a3ce520cd8708056bb87a" gracePeriod=30 Jan 27 13:36:21 crc kubenswrapper[4786]: I0127 13:36:21.050669 4786 generic.go:334] "Generic (PLEG): container finished" podID="51c7d69a-7abb-4abe-8954-874bcd3c8f33" containerID="21fabf85db70b1388eab53d6a7c98be1c30e2e42b8fa9b0027542aba48182fbd" exitCode=143 Jan 27 13:36:21 crc kubenswrapper[4786]: I0127 13:36:21.050718 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" 
event={"ID":"51c7d69a-7abb-4abe-8954-874bcd3c8f33","Type":"ContainerDied","Data":"21fabf85db70b1388eab53d6a7c98be1c30e2e42b8fa9b0027542aba48182fbd"} Jan 27 13:36:21 crc kubenswrapper[4786]: E0127 13:36:21.121002 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e24809012c4038884894028afc330aa68ac0ab179f0a3ce520cd8708056bb87a" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 13:36:21 crc kubenswrapper[4786]: E0127 13:36:21.122620 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e24809012c4038884894028afc330aa68ac0ab179f0a3ce520cd8708056bb87a" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 13:36:21 crc kubenswrapper[4786]: E0127 13:36:21.123840 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e24809012c4038884894028afc330aa68ac0ab179f0a3ce520cd8708056bb87a" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 13:36:21 crc kubenswrapper[4786]: E0127 13:36:21.123874 4786 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podUID="210ccb3b-41d2-4166-8326-0d44f933bd23" containerName="nova-kuttl-cell0-conductor-conductor" Jan 27 13:36:24 crc kubenswrapper[4786]: I0127 13:36:24.086077 4786 generic.go:334] "Generic (PLEG): container finished" podID="51c7d69a-7abb-4abe-8954-874bcd3c8f33" containerID="7becddc47e821d4547b2bcaf14514e7a9b3cf9c84f09e3a37b5ebe5b5106e95b" exitCode=0 Jan 
27 13:36:24 crc kubenswrapper[4786]: I0127 13:36:24.086165 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"51c7d69a-7abb-4abe-8954-874bcd3c8f33","Type":"ContainerDied","Data":"7becddc47e821d4547b2bcaf14514e7a9b3cf9c84f09e3a37b5ebe5b5106e95b"} Jan 27 13:36:24 crc kubenswrapper[4786]: I0127 13:36:24.172495 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:36:24 crc kubenswrapper[4786]: I0127 13:36:24.221125 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9b97\" (UniqueName: \"kubernetes.io/projected/51c7d69a-7abb-4abe-8954-874bcd3c8f33-kube-api-access-z9b97\") pod \"51c7d69a-7abb-4abe-8954-874bcd3c8f33\" (UID: \"51c7d69a-7abb-4abe-8954-874bcd3c8f33\") " Jan 27 13:36:24 crc kubenswrapper[4786]: I0127 13:36:24.221213 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51c7d69a-7abb-4abe-8954-874bcd3c8f33-config-data\") pod \"51c7d69a-7abb-4abe-8954-874bcd3c8f33\" (UID: \"51c7d69a-7abb-4abe-8954-874bcd3c8f33\") " Jan 27 13:36:24 crc kubenswrapper[4786]: I0127 13:36:24.221250 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51c7d69a-7abb-4abe-8954-874bcd3c8f33-logs\") pod \"51c7d69a-7abb-4abe-8954-874bcd3c8f33\" (UID: \"51c7d69a-7abb-4abe-8954-874bcd3c8f33\") " Jan 27 13:36:24 crc kubenswrapper[4786]: I0127 13:36:24.221804 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51c7d69a-7abb-4abe-8954-874bcd3c8f33-logs" (OuterVolumeSpecName: "logs") pod "51c7d69a-7abb-4abe-8954-874bcd3c8f33" (UID: "51c7d69a-7abb-4abe-8954-874bcd3c8f33"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:36:24 crc kubenswrapper[4786]: I0127 13:36:24.229207 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51c7d69a-7abb-4abe-8954-874bcd3c8f33-kube-api-access-z9b97" (OuterVolumeSpecName: "kube-api-access-z9b97") pod "51c7d69a-7abb-4abe-8954-874bcd3c8f33" (UID: "51c7d69a-7abb-4abe-8954-874bcd3c8f33"). InnerVolumeSpecName "kube-api-access-z9b97". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:36:24 crc kubenswrapper[4786]: I0127 13:36:24.250781 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51c7d69a-7abb-4abe-8954-874bcd3c8f33-config-data" (OuterVolumeSpecName: "config-data") pod "51c7d69a-7abb-4abe-8954-874bcd3c8f33" (UID: "51c7d69a-7abb-4abe-8954-874bcd3c8f33"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:36:24 crc kubenswrapper[4786]: I0127 13:36:24.322539 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9b97\" (UniqueName: \"kubernetes.io/projected/51c7d69a-7abb-4abe-8954-874bcd3c8f33-kube-api-access-z9b97\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:24 crc kubenswrapper[4786]: I0127 13:36:24.322584 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51c7d69a-7abb-4abe-8954-874bcd3c8f33-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:24 crc kubenswrapper[4786]: I0127 13:36:24.322597 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51c7d69a-7abb-4abe-8954-874bcd3c8f33-logs\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:24 crc kubenswrapper[4786]: I0127 13:36:24.830514 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:36:24 crc kubenswrapper[4786]: I0127 13:36:24.933926 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqzpz\" (UniqueName: \"kubernetes.io/projected/210ccb3b-41d2-4166-8326-0d44f933bd23-kube-api-access-rqzpz\") pod \"210ccb3b-41d2-4166-8326-0d44f933bd23\" (UID: \"210ccb3b-41d2-4166-8326-0d44f933bd23\") " Jan 27 13:36:24 crc kubenswrapper[4786]: I0127 13:36:24.934345 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/210ccb3b-41d2-4166-8326-0d44f933bd23-config-data\") pod \"210ccb3b-41d2-4166-8326-0d44f933bd23\" (UID: \"210ccb3b-41d2-4166-8326-0d44f933bd23\") " Jan 27 13:36:24 crc kubenswrapper[4786]: I0127 13:36:24.938636 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210ccb3b-41d2-4166-8326-0d44f933bd23-kube-api-access-rqzpz" (OuterVolumeSpecName: "kube-api-access-rqzpz") pod "210ccb3b-41d2-4166-8326-0d44f933bd23" (UID: "210ccb3b-41d2-4166-8326-0d44f933bd23"). InnerVolumeSpecName "kube-api-access-rqzpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:36:24 crc kubenswrapper[4786]: I0127 13:36:24.960168 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210ccb3b-41d2-4166-8326-0d44f933bd23-config-data" (OuterVolumeSpecName: "config-data") pod "210ccb3b-41d2-4166-8326-0d44f933bd23" (UID: "210ccb3b-41d2-4166-8326-0d44f933bd23"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:36:25 crc kubenswrapper[4786]: I0127 13:36:25.041564 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqzpz\" (UniqueName: \"kubernetes.io/projected/210ccb3b-41d2-4166-8326-0d44f933bd23-kube-api-access-rqzpz\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:25 crc kubenswrapper[4786]: I0127 13:36:25.041614 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/210ccb3b-41d2-4166-8326-0d44f933bd23-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:25 crc kubenswrapper[4786]: I0127 13:36:25.095550 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:36:25 crc kubenswrapper[4786]: I0127 13:36:25.095543 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"51c7d69a-7abb-4abe-8954-874bcd3c8f33","Type":"ContainerDied","Data":"1166250b16a40ab3ef7919b3daf76fae960464713a73ed89198d2f232f22bf68"} Jan 27 13:36:25 crc kubenswrapper[4786]: I0127 13:36:25.095717 4786 scope.go:117] "RemoveContainer" containerID="7becddc47e821d4547b2bcaf14514e7a9b3cf9c84f09e3a37b5ebe5b5106e95b" Jan 27 13:36:25 crc kubenswrapper[4786]: I0127 13:36:25.097060 4786 generic.go:334] "Generic (PLEG): container finished" podID="210ccb3b-41d2-4166-8326-0d44f933bd23" containerID="e24809012c4038884894028afc330aa68ac0ab179f0a3ce520cd8708056bb87a" exitCode=0 Jan 27 13:36:25 crc kubenswrapper[4786]: I0127 13:36:25.097101 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"210ccb3b-41d2-4166-8326-0d44f933bd23","Type":"ContainerDied","Data":"e24809012c4038884894028afc330aa68ac0ab179f0a3ce520cd8708056bb87a"} Jan 27 13:36:25 crc kubenswrapper[4786]: I0127 13:36:25.097150 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"210ccb3b-41d2-4166-8326-0d44f933bd23","Type":"ContainerDied","Data":"0b405b8c78e5a37556e1bc76a5d208abe7693398e45340c4e33ebbddcbd7b15d"} Jan 27 13:36:25 crc kubenswrapper[4786]: I0127 13:36:25.097190 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:36:25 crc kubenswrapper[4786]: I0127 13:36:25.118073 4786 scope.go:117] "RemoveContainer" containerID="21fabf85db70b1388eab53d6a7c98be1c30e2e42b8fa9b0027542aba48182fbd" Jan 27 13:36:25 crc kubenswrapper[4786]: I0127 13:36:25.144315 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:36:25 crc kubenswrapper[4786]: I0127 13:36:25.153666 4786 scope.go:117] "RemoveContainer" containerID="e24809012c4038884894028afc330aa68ac0ab179f0a3ce520cd8708056bb87a" Jan 27 13:36:25 crc kubenswrapper[4786]: I0127 13:36:25.163988 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:36:25 crc kubenswrapper[4786]: I0127 13:36:25.176162 4786 scope.go:117] "RemoveContainer" containerID="e24809012c4038884894028afc330aa68ac0ab179f0a3ce520cd8708056bb87a" Jan 27 13:36:25 crc kubenswrapper[4786]: E0127 13:36:25.176621 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e24809012c4038884894028afc330aa68ac0ab179f0a3ce520cd8708056bb87a\": container with ID starting with e24809012c4038884894028afc330aa68ac0ab179f0a3ce520cd8708056bb87a not found: ID does not exist" containerID="e24809012c4038884894028afc330aa68ac0ab179f0a3ce520cd8708056bb87a" Jan 27 13:36:25 crc kubenswrapper[4786]: I0127 13:36:25.176660 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e24809012c4038884894028afc330aa68ac0ab179f0a3ce520cd8708056bb87a"} err="failed to get container status 
\"e24809012c4038884894028afc330aa68ac0ab179f0a3ce520cd8708056bb87a\": rpc error: code = NotFound desc = could not find container \"e24809012c4038884894028afc330aa68ac0ab179f0a3ce520cd8708056bb87a\": container with ID starting with e24809012c4038884894028afc330aa68ac0ab179f0a3ce520cd8708056bb87a not found: ID does not exist" Jan 27 13:36:25 crc kubenswrapper[4786]: I0127 13:36:25.179800 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 27 13:36:25 crc kubenswrapper[4786]: I0127 13:36:25.194747 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 27 13:36:25 crc kubenswrapper[4786]: I0127 13:36:25.328646 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:36:25 crc kubenswrapper[4786]: I0127 13:36:25.328878 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="f1abe3dd-6930-44ad-80c7-de3757265d02" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://79f500f914e78ea163f0fcaea04e0c1859fe73ef1f570f8ef0fc67b224e87242" gracePeriod=30 Jan 27 13:36:25 crc kubenswrapper[4786]: I0127 13:36:25.400177 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:36:25 crc kubenswrapper[4786]: I0127 13:36:25.400443 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="2ea95db7-d568-46c3-8874-98748692c4fe" containerName="nova-kuttl-metadata-log" containerID="cri-o://104a660849b90da9aa6f1ac9393aefd431f96d0207343c6ddba13d10403447c6" gracePeriod=30 Jan 27 13:36:25 crc kubenswrapper[4786]: I0127 13:36:25.400517 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="2ea95db7-d568-46c3-8874-98748692c4fe" 
containerName="nova-kuttl-metadata-metadata" containerID="cri-o://ba56aca22a2e60ed77549cca35bf7bd0cea1e10bb1b49792edabcb6881995ba3" gracePeriod=30 Jan 27 13:36:25 crc kubenswrapper[4786]: I0127 13:36:25.474279 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210ccb3b-41d2-4166-8326-0d44f933bd23" path="/var/lib/kubelet/pods/210ccb3b-41d2-4166-8326-0d44f933bd23/volumes" Jan 27 13:36:25 crc kubenswrapper[4786]: I0127 13:36:25.474852 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51c7d69a-7abb-4abe-8954-874bcd3c8f33" path="/var/lib/kubelet/pods/51c7d69a-7abb-4abe-8954-874bcd3c8f33/volumes" Jan 27 13:36:25 crc kubenswrapper[4786]: I0127 13:36:25.652460 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 27 13:36:25 crc kubenswrapper[4786]: I0127 13:36:25.652706 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podUID="d62b7637-2d50-41c6-8aaf-5f741b49e241" containerName="nova-kuttl-cell1-conductor-conductor" containerID="cri-o://a88465c939ad7b809b684a203530f4aa809458dffe9b0256f104de7cecbccb34" gracePeriod=30 Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.108729 4786 generic.go:334] "Generic (PLEG): container finished" podID="f1abe3dd-6930-44ad-80c7-de3757265d02" containerID="79f500f914e78ea163f0fcaea04e0c1859fe73ef1f570f8ef0fc67b224e87242" exitCode=0 Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.108834 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"f1abe3dd-6930-44ad-80c7-de3757265d02","Type":"ContainerDied","Data":"79f500f914e78ea163f0fcaea04e0c1859fe73ef1f570f8ef0fc67b224e87242"} Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.110586 4786 generic.go:334] "Generic (PLEG): container finished" podID="2ea95db7-d568-46c3-8874-98748692c4fe" 
containerID="104a660849b90da9aa6f1ac9393aefd431f96d0207343c6ddba13d10403447c6" exitCode=143 Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.110636 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"2ea95db7-d568-46c3-8874-98748692c4fe","Type":"ContainerDied","Data":"104a660849b90da9aa6f1ac9393aefd431f96d0207343c6ddba13d10403447c6"} Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.304142 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.362536 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bdkp\" (UniqueName: \"kubernetes.io/projected/f1abe3dd-6930-44ad-80c7-de3757265d02-kube-api-access-8bdkp\") pod \"f1abe3dd-6930-44ad-80c7-de3757265d02\" (UID: \"f1abe3dd-6930-44ad-80c7-de3757265d02\") " Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.362717 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1abe3dd-6930-44ad-80c7-de3757265d02-config-data\") pod \"f1abe3dd-6930-44ad-80c7-de3757265d02\" (UID: \"f1abe3dd-6930-44ad-80c7-de3757265d02\") " Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.367310 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1abe3dd-6930-44ad-80c7-de3757265d02-kube-api-access-8bdkp" (OuterVolumeSpecName: "kube-api-access-8bdkp") pod "f1abe3dd-6930-44ad-80c7-de3757265d02" (UID: "f1abe3dd-6930-44ad-80c7-de3757265d02"). InnerVolumeSpecName "kube-api-access-8bdkp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.382827 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1abe3dd-6930-44ad-80c7-de3757265d02-config-data" (OuterVolumeSpecName: "config-data") pod "f1abe3dd-6930-44ad-80c7-de3757265d02" (UID: "f1abe3dd-6930-44ad-80c7-de3757265d02"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.468089 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1abe3dd-6930-44ad-80c7-de3757265d02-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.468129 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bdkp\" (UniqueName: \"kubernetes.io/projected/f1abe3dd-6930-44ad-80c7-de3757265d02-kube-api-access-8bdkp\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.817183 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-mnd7z"] Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.824440 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2xxqx"] Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.833238 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2xxqx"] Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.845200 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-mnd7z"] Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.879986 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novacell032e8-account-delete-r44mb"] Jan 27 13:36:26 crc kubenswrapper[4786]: E0127 13:36:26.880746 4786 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51c7d69a-7abb-4abe-8954-874bcd3c8f33" containerName="nova-kuttl-api-api" Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.880768 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="51c7d69a-7abb-4abe-8954-874bcd3c8f33" containerName="nova-kuttl-api-api" Jan 27 13:36:26 crc kubenswrapper[4786]: E0127 13:36:26.880793 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51df781b-8436-4545-8e9a-c9770e35814b" containerName="nova-kuttl-api-log" Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.880804 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="51df781b-8436-4545-8e9a-c9770e35814b" containerName="nova-kuttl-api-log" Jan 27 13:36:26 crc kubenswrapper[4786]: E0127 13:36:26.880833 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1247ecc9-9177-446b-b939-3d158d4d0cd0" containerName="nova-kuttl-cell0-conductor-conductor" Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.880850 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="1247ecc9-9177-446b-b939-3d158d4d0cd0" containerName="nova-kuttl-cell0-conductor-conductor" Jan 27 13:36:26 crc kubenswrapper[4786]: E0127 13:36:26.880883 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01603e06-2096-4686-934f-59aade63c30d" containerName="nova-kuttl-api-log" Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.880893 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="01603e06-2096-4686-934f-59aade63c30d" containerName="nova-kuttl-api-log" Jan 27 13:36:26 crc kubenswrapper[4786]: E0127 13:36:26.884707 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df824f05-4250-4af1-a208-e4d1233d297f" containerName="nova-kuttl-metadata-metadata" Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.884742 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="df824f05-4250-4af1-a208-e4d1233d297f" 
containerName="nova-kuttl-metadata-metadata" Jan 27 13:36:26 crc kubenswrapper[4786]: E0127 13:36:26.884757 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62afa32f-2cf3-4581-8db3-0a4dd0d70545" containerName="nova-kuttl-cell0-conductor-conductor" Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.884766 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="62afa32f-2cf3-4581-8db3-0a4dd0d70545" containerName="nova-kuttl-cell0-conductor-conductor" Jan 27 13:36:26 crc kubenswrapper[4786]: E0127 13:36:26.884803 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2f7b9c2-401b-456a-93d3-cb99227f3a21" containerName="nova-kuttl-scheduler-scheduler" Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.884811 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2f7b9c2-401b-456a-93d3-cb99227f3a21" containerName="nova-kuttl-scheduler-scheduler" Jan 27 13:36:26 crc kubenswrapper[4786]: E0127 13:36:26.884831 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c584017-f51e-4001-b5b6-dacefc2d7658" containerName="nova-kuttl-cell1-conductor-conductor" Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.884840 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c584017-f51e-4001-b5b6-dacefc2d7658" containerName="nova-kuttl-cell1-conductor-conductor" Jan 27 13:36:26 crc kubenswrapper[4786]: E0127 13:36:26.884867 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01603e06-2096-4686-934f-59aade63c30d" containerName="nova-kuttl-api-api" Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.884877 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="01603e06-2096-4686-934f-59aade63c30d" containerName="nova-kuttl-api-api" Jan 27 13:36:26 crc kubenswrapper[4786]: E0127 13:36:26.884892 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="602c933e-40d5-42c8-8319-960566e74b61" containerName="nova-kuttl-cell1-conductor-conductor" Jan 27 13:36:26 crc 
kubenswrapper[4786]: I0127 13:36:26.884900 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="602c933e-40d5-42c8-8319-960566e74b61" containerName="nova-kuttl-cell1-conductor-conductor" Jan 27 13:36:26 crc kubenswrapper[4786]: E0127 13:36:26.884911 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="898828a7-a3cd-4455-b719-15d886f937a4" containerName="nova-kuttl-metadata-log" Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.884918 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="898828a7-a3cd-4455-b719-15d886f937a4" containerName="nova-kuttl-metadata-log" Jan 27 13:36:26 crc kubenswrapper[4786]: E0127 13:36:26.884946 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1abe3dd-6930-44ad-80c7-de3757265d02" containerName="nova-kuttl-scheduler-scheduler" Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.884954 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1abe3dd-6930-44ad-80c7-de3757265d02" containerName="nova-kuttl-scheduler-scheduler" Jan 27 13:36:26 crc kubenswrapper[4786]: E0127 13:36:26.884986 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c946efaf-f2a6-48b6-a52c-3fd537fc15f4" containerName="nova-kuttl-scheduler-scheduler" Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.884993 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="c946efaf-f2a6-48b6-a52c-3fd537fc15f4" containerName="nova-kuttl-scheduler-scheduler" Jan 27 13:36:26 crc kubenswrapper[4786]: E0127 13:36:26.885013 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51df781b-8436-4545-8e9a-c9770e35814b" containerName="nova-kuttl-api-api" Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.885023 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="51df781b-8436-4545-8e9a-c9770e35814b" containerName="nova-kuttl-api-api" Jan 27 13:36:26 crc kubenswrapper[4786]: E0127 13:36:26.885032 4786 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="210ccb3b-41d2-4166-8326-0d44f933bd23" containerName="nova-kuttl-cell0-conductor-conductor" Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.885039 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="210ccb3b-41d2-4166-8326-0d44f933bd23" containerName="nova-kuttl-cell0-conductor-conductor" Jan 27 13:36:26 crc kubenswrapper[4786]: E0127 13:36:26.885058 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51c7d69a-7abb-4abe-8954-874bcd3c8f33" containerName="nova-kuttl-api-log" Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.885071 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="51c7d69a-7abb-4abe-8954-874bcd3c8f33" containerName="nova-kuttl-api-log" Jan 27 13:36:26 crc kubenswrapper[4786]: E0127 13:36:26.885097 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df824f05-4250-4af1-a208-e4d1233d297f" containerName="nova-kuttl-metadata-log" Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.885110 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="df824f05-4250-4af1-a208-e4d1233d297f" containerName="nova-kuttl-metadata-log" Jan 27 13:36:26 crc kubenswrapper[4786]: E0127 13:36:26.885142 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="898828a7-a3cd-4455-b719-15d886f937a4" containerName="nova-kuttl-metadata-metadata" Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.885151 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="898828a7-a3cd-4455-b719-15d886f937a4" containerName="nova-kuttl-metadata-metadata" Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.885733 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c584017-f51e-4001-b5b6-dacefc2d7658" containerName="nova-kuttl-cell1-conductor-conductor" Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.885748 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="01603e06-2096-4686-934f-59aade63c30d" containerName="nova-kuttl-api-api" Jan 27 13:36:26 
crc kubenswrapper[4786]: I0127 13:36:26.885774 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="51df781b-8436-4545-8e9a-c9770e35814b" containerName="nova-kuttl-api-log" Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.885792 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="602c933e-40d5-42c8-8319-960566e74b61" containerName="nova-kuttl-cell1-conductor-conductor" Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.885809 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="01603e06-2096-4686-934f-59aade63c30d" containerName="nova-kuttl-api-log" Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.885826 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="898828a7-a3cd-4455-b719-15d886f937a4" containerName="nova-kuttl-metadata-metadata" Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.885842 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="898828a7-a3cd-4455-b719-15d886f937a4" containerName="nova-kuttl-metadata-log" Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.885861 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1abe3dd-6930-44ad-80c7-de3757265d02" containerName="nova-kuttl-scheduler-scheduler" Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.885875 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="51c7d69a-7abb-4abe-8954-874bcd3c8f33" containerName="nova-kuttl-api-log" Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.885893 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="df824f05-4250-4af1-a208-e4d1233d297f" containerName="nova-kuttl-metadata-log" Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.885901 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="62afa32f-2cf3-4581-8db3-0a4dd0d70545" containerName="nova-kuttl-cell0-conductor-conductor" Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.885927 4786 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="df824f05-4250-4af1-a208-e4d1233d297f" containerName="nova-kuttl-metadata-metadata" Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.885937 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="210ccb3b-41d2-4166-8326-0d44f933bd23" containerName="nova-kuttl-cell0-conductor-conductor" Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.885953 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="1247ecc9-9177-446b-b939-3d158d4d0cd0" containerName="nova-kuttl-cell0-conductor-conductor" Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.885980 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="51c7d69a-7abb-4abe-8954-874bcd3c8f33" containerName="nova-kuttl-api-api" Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.885997 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="51df781b-8436-4545-8e9a-c9770e35814b" containerName="nova-kuttl-api-api" Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.886012 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="c946efaf-f2a6-48b6-a52c-3fd537fc15f4" containerName="nova-kuttl-scheduler-scheduler" Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.886027 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2f7b9c2-401b-456a-93d3-cb99227f3a21" containerName="nova-kuttl-scheduler-scheduler" Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.886959 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell032e8-account-delete-r44mb" Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.919035 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell032e8-account-delete-r44mb"] Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.972217 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.972496 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podUID="9de43c40-ccfa-41ef-ae49-bb23e5a71c45" containerName="nova-kuttl-cell1-novncproxy-novncproxy" containerID="cri-o://fda910f767d34ceca6fd886b75c30a05806b9bcf8a84b4fc3bbd8530d881cdc2" gracePeriod=30 Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.980682 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5c5d62c-579c-4649-9283-771b9f8d61d5-operator-scripts\") pod \"novacell032e8-account-delete-r44mb\" (UID: \"c5c5d62c-579c-4649-9283-771b9f8d61d5\") " pod="nova-kuttl-default/novacell032e8-account-delete-r44mb" Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.980767 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgkjk\" (UniqueName: \"kubernetes.io/projected/c5c5d62c-579c-4649-9283-771b9f8d61d5-kube-api-access-lgkjk\") pod \"novacell032e8-account-delete-r44mb\" (UID: \"c5c5d62c-579c-4649-9283-771b9f8d61d5\") " pod="nova-kuttl-default/novacell032e8-account-delete-r44mb" Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.988828 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novaapid9c7-account-delete-p9v4p"] Jan 27 13:36:26 crc kubenswrapper[4786]: I0127 13:36:26.990450 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novaapid9c7-account-delete-p9v4p" Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.001439 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novaapid9c7-account-delete-p9v4p"] Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.077342 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novacell13f48-account-delete-bx2v2"] Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.078301 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell13f48-account-delete-bx2v2" Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.082478 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/105e08bd-3b17-43c8-86c3-e503a0103227-operator-scripts\") pod \"novaapid9c7-account-delete-p9v4p\" (UID: \"105e08bd-3b17-43c8-86c3-e503a0103227\") " pod="nova-kuttl-default/novaapid9c7-account-delete-p9v4p" Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.087856 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5c5d62c-579c-4649-9283-771b9f8d61d5-operator-scripts\") pod \"novacell032e8-account-delete-r44mb\" (UID: \"c5c5d62c-579c-4649-9283-771b9f8d61d5\") " pod="nova-kuttl-default/novacell032e8-account-delete-r44mb" Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.088088 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pncw\" (UniqueName: \"kubernetes.io/projected/105e08bd-3b17-43c8-86c3-e503a0103227-kube-api-access-4pncw\") pod \"novaapid9c7-account-delete-p9v4p\" (UID: \"105e08bd-3b17-43c8-86c3-e503a0103227\") " pod="nova-kuttl-default/novaapid9c7-account-delete-p9v4p" Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.088229 4786 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgkjk\" (UniqueName: \"kubernetes.io/projected/c5c5d62c-579c-4649-9283-771b9f8d61d5-kube-api-access-lgkjk\") pod \"novacell032e8-account-delete-r44mb\" (UID: \"c5c5d62c-579c-4649-9283-771b9f8d61d5\") " pod="nova-kuttl-default/novacell032e8-account-delete-r44mb" Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.088144 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell13f48-account-delete-bx2v2"] Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.089103 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5c5d62c-579c-4649-9283-771b9f8d61d5-operator-scripts\") pod \"novacell032e8-account-delete-r44mb\" (UID: \"c5c5d62c-579c-4649-9283-771b9f8d61d5\") " pod="nova-kuttl-default/novacell032e8-account-delete-r44mb" Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.117496 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgkjk\" (UniqueName: \"kubernetes.io/projected/c5c5d62c-579c-4649-9283-771b9f8d61d5-kube-api-access-lgkjk\") pod \"novacell032e8-account-delete-r44mb\" (UID: \"c5c5d62c-579c-4649-9283-771b9f8d61d5\") " pod="nova-kuttl-default/novacell032e8-account-delete-r44mb" Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.128109 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"f1abe3dd-6930-44ad-80c7-de3757265d02","Type":"ContainerDied","Data":"c2b818d5abaeb2edb2acea50e7c034e7f7561e09c1791817468b72f9016c9e00"} Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.128176 4786 scope.go:117] "RemoveContainer" containerID="79f500f914e78ea163f0fcaea04e0c1859fe73ef1f570f8ef0fc67b224e87242" Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.128339 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.161229 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jrnz6"] Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.174676 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jrnz6"] Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.184273 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-4n2dz"] Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.189635 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pncw\" (UniqueName: \"kubernetes.io/projected/105e08bd-3b17-43c8-86c3-e503a0103227-kube-api-access-4pncw\") pod \"novaapid9c7-account-delete-p9v4p\" (UID: \"105e08bd-3b17-43c8-86c3-e503a0103227\") " pod="nova-kuttl-default/novaapid9c7-account-delete-p9v4p" Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.189696 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43de741a-7009-450e-ab6a-45475015fed0-operator-scripts\") pod \"novacell13f48-account-delete-bx2v2\" (UID: \"43de741a-7009-450e-ab6a-45475015fed0\") " pod="nova-kuttl-default/novacell13f48-account-delete-bx2v2" Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.189725 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgcnl\" (UniqueName: \"kubernetes.io/projected/43de741a-7009-450e-ab6a-45475015fed0-kube-api-access-zgcnl\") pod \"novacell13f48-account-delete-bx2v2\" (UID: \"43de741a-7009-450e-ab6a-45475015fed0\") " pod="nova-kuttl-default/novacell13f48-account-delete-bx2v2" Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.189745 4786 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/105e08bd-3b17-43c8-86c3-e503a0103227-operator-scripts\") pod \"novaapid9c7-account-delete-p9v4p\" (UID: \"105e08bd-3b17-43c8-86c3-e503a0103227\") " pod="nova-kuttl-default/novaapid9c7-account-delete-p9v4p" Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.190505 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/105e08bd-3b17-43c8-86c3-e503a0103227-operator-scripts\") pod \"novaapid9c7-account-delete-p9v4p\" (UID: \"105e08bd-3b17-43c8-86c3-e503a0103227\") " pod="nova-kuttl-default/novaapid9c7-account-delete-p9v4p" Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.192893 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-4n2dz"] Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.201115 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.209256 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.229290 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell032e8-account-delete-r44mb" Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.248660 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pncw\" (UniqueName: \"kubernetes.io/projected/105e08bd-3b17-43c8-86c3-e503a0103227-kube-api-access-4pncw\") pod \"novaapid9c7-account-delete-p9v4p\" (UID: \"105e08bd-3b17-43c8-86c3-e503a0103227\") " pod="nova-kuttl-default/novaapid9c7-account-delete-p9v4p" Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.290915 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43de741a-7009-450e-ab6a-45475015fed0-operator-scripts\") pod \"novacell13f48-account-delete-bx2v2\" (UID: \"43de741a-7009-450e-ab6a-45475015fed0\") " pod="nova-kuttl-default/novacell13f48-account-delete-bx2v2" Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.290972 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgcnl\" (UniqueName: \"kubernetes.io/projected/43de741a-7009-450e-ab6a-45475015fed0-kube-api-access-zgcnl\") pod \"novacell13f48-account-delete-bx2v2\" (UID: \"43de741a-7009-450e-ab6a-45475015fed0\") " pod="nova-kuttl-default/novacell13f48-account-delete-bx2v2" Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.291727 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43de741a-7009-450e-ab6a-45475015fed0-operator-scripts\") pod \"novacell13f48-account-delete-bx2v2\" (UID: \"43de741a-7009-450e-ab6a-45475015fed0\") " pod="nova-kuttl-default/novacell13f48-account-delete-bx2v2" Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.319497 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgcnl\" (UniqueName: 
\"kubernetes.io/projected/43de741a-7009-450e-ab6a-45475015fed0-kube-api-access-zgcnl\") pod \"novacell13f48-account-delete-bx2v2\" (UID: \"43de741a-7009-450e-ab6a-45475015fed0\") " pod="nova-kuttl-default/novacell13f48-account-delete-bx2v2" Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.365251 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podUID="9de43c40-ccfa-41ef-ae49-bb23e5a71c45" containerName="nova-kuttl-cell1-novncproxy-novncproxy" probeResult="failure" output="Get \"http://10.217.0.159:6080/vnc_lite.html\": dial tcp 10.217.0.159:6080: connect: connection refused" Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.372475 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapid9c7-account-delete-p9v4p" Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.413878 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell13f48-account-delete-bx2v2" Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.480773 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c21dee95-36c0-4f2a-983d-f7b8b8b383a9" path="/var/lib/kubelet/pods/c21dee95-36c0-4f2a-983d-f7b8b8b383a9/volumes" Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.481794 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da9c38f7-ea97-4faa-922f-a3087cee1b21" path="/var/lib/kubelet/pods/da9c38f7-ea97-4faa-922f-a3087cee1b21/volumes" Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.482406 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e67e53fe-61a8-4a3d-aa25-75530bca5677" path="/var/lib/kubelet/pods/e67e53fe-61a8-4a3d-aa25-75530bca5677/volumes" Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.483321 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1abe3dd-6930-44ad-80c7-de3757265d02" 
path="/var/lib/kubelet/pods/f1abe3dd-6930-44ad-80c7-de3757265d02/volumes" Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.484580 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbab3f0a-3fee-457b-a387-1394a96d3847" path="/var/lib/kubelet/pods/fbab3f0a-3fee-457b-a387-1394a96d3847/volumes" Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.721971 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell032e8-account-delete-r44mb"] Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.859276 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novaapid9c7-account-delete-p9v4p"] Jan 27 13:36:27 crc kubenswrapper[4786]: W0127 13:36:27.867689 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod105e08bd_3b17_43c8_86c3_e503a0103227.slice/crio-60e3ad74611112e3714dee1941fa21c73a79fc9999fb5c8f887942c8090f29bf WatchSource:0}: Error finding container 60e3ad74611112e3714dee1941fa21c73a79fc9999fb5c8f887942c8090f29bf: Status 404 returned error can't find the container with id 60e3ad74611112e3714dee1941fa21c73a79fc9999fb5c8f887942c8090f29bf Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.883349 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.905252 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9f6fg\" (UniqueName: \"kubernetes.io/projected/9de43c40-ccfa-41ef-ae49-bb23e5a71c45-kube-api-access-9f6fg\") pod \"9de43c40-ccfa-41ef-ae49-bb23e5a71c45\" (UID: \"9de43c40-ccfa-41ef-ae49-bb23e5a71c45\") " Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.905374 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9de43c40-ccfa-41ef-ae49-bb23e5a71c45-config-data\") pod \"9de43c40-ccfa-41ef-ae49-bb23e5a71c45\" (UID: \"9de43c40-ccfa-41ef-ae49-bb23e5a71c45\") " Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.916660 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9de43c40-ccfa-41ef-ae49-bb23e5a71c45-kube-api-access-9f6fg" (OuterVolumeSpecName: "kube-api-access-9f6fg") pod "9de43c40-ccfa-41ef-ae49-bb23e5a71c45" (UID: "9de43c40-ccfa-41ef-ae49-bb23e5a71c45"). InnerVolumeSpecName "kube-api-access-9f6fg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.940400 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell13f48-account-delete-bx2v2"] Jan 27 13:36:27 crc kubenswrapper[4786]: I0127 13:36:27.942483 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9de43c40-ccfa-41ef-ae49-bb23e5a71c45-config-data" (OuterVolumeSpecName: "config-data") pod "9de43c40-ccfa-41ef-ae49-bb23e5a71c45" (UID: "9de43c40-ccfa-41ef-ae49-bb23e5a71c45"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:36:28 crc kubenswrapper[4786]: I0127 13:36:28.008291 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9de43c40-ccfa-41ef-ae49-bb23e5a71c45-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:28 crc kubenswrapper[4786]: I0127 13:36:28.008318 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9f6fg\" (UniqueName: \"kubernetes.io/projected/9de43c40-ccfa-41ef-ae49-bb23e5a71c45-kube-api-access-9f6fg\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:28 crc kubenswrapper[4786]: I0127 13:36:28.141866 4786 generic.go:334] "Generic (PLEG): container finished" podID="9de43c40-ccfa-41ef-ae49-bb23e5a71c45" containerID="fda910f767d34ceca6fd886b75c30a05806b9bcf8a84b4fc3bbd8530d881cdc2" exitCode=0 Jan 27 13:36:28 crc kubenswrapper[4786]: I0127 13:36:28.141933 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:36:28 crc kubenswrapper[4786]: I0127 13:36:28.141974 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"9de43c40-ccfa-41ef-ae49-bb23e5a71c45","Type":"ContainerDied","Data":"fda910f767d34ceca6fd886b75c30a05806b9bcf8a84b4fc3bbd8530d881cdc2"} Jan 27 13:36:28 crc kubenswrapper[4786]: I0127 13:36:28.143177 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"9de43c40-ccfa-41ef-ae49-bb23e5a71c45","Type":"ContainerDied","Data":"1678d1fc08d9bc95f18153edc23ee2acce9a4feba9da24d25b3ea738dc5591b1"} Jan 27 13:36:28 crc kubenswrapper[4786]: I0127 13:36:28.143204 4786 scope.go:117] "RemoveContainer" containerID="fda910f767d34ceca6fd886b75c30a05806b9bcf8a84b4fc3bbd8530d881cdc2" Jan 27 13:36:28 crc kubenswrapper[4786]: I0127 13:36:28.144755 4786 generic.go:334] "Generic (PLEG): container finished" 
podID="c5c5d62c-579c-4649-9283-771b9f8d61d5" containerID="8b9cbc11753b921be12e241a470e6262e65a5b14ca7d199183aa5fca06cf8e5a" exitCode=0 Jan 27 13:36:28 crc kubenswrapper[4786]: I0127 13:36:28.144825 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell032e8-account-delete-r44mb" event={"ID":"c5c5d62c-579c-4649-9283-771b9f8d61d5","Type":"ContainerDied","Data":"8b9cbc11753b921be12e241a470e6262e65a5b14ca7d199183aa5fca06cf8e5a"} Jan 27 13:36:28 crc kubenswrapper[4786]: I0127 13:36:28.144851 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell032e8-account-delete-r44mb" event={"ID":"c5c5d62c-579c-4649-9283-771b9f8d61d5","Type":"ContainerStarted","Data":"4956e91a473d5ae49110225d42d84c6bf371dcd0f287adb894552e6be9f2e6e2"} Jan 27 13:36:28 crc kubenswrapper[4786]: I0127 13:36:28.146430 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell13f48-account-delete-bx2v2" event={"ID":"43de741a-7009-450e-ab6a-45475015fed0","Type":"ContainerStarted","Data":"5dd0eaa5f48cde69d4caa5058c8b1b5d4d7ed7fc75c93286097b843daab9eb19"} Jan 27 13:36:28 crc kubenswrapper[4786]: I0127 13:36:28.146457 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell13f48-account-delete-bx2v2" event={"ID":"43de741a-7009-450e-ab6a-45475015fed0","Type":"ContainerStarted","Data":"51881b79adfb25694ac5e657c8364cb0f8c3e9674e6e5e58d4c10adb59f3c8c8"} Jan 27 13:36:28 crc kubenswrapper[4786]: I0127 13:36:28.148107 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapid9c7-account-delete-p9v4p" event={"ID":"105e08bd-3b17-43c8-86c3-e503a0103227","Type":"ContainerStarted","Data":"85663c4bbed577d667088115352b6b9abaa15f9794edc68db0e7f6a8701e7831"} Jan 27 13:36:28 crc kubenswrapper[4786]: I0127 13:36:28.152812 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapid9c7-account-delete-p9v4p" 
event={"ID":"105e08bd-3b17-43c8-86c3-e503a0103227","Type":"ContainerStarted","Data":"60e3ad74611112e3714dee1941fa21c73a79fc9999fb5c8f887942c8090f29bf"} Jan 27 13:36:28 crc kubenswrapper[4786]: I0127 13:36:28.163903 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/novaapid9c7-account-delete-p9v4p" podStartSLOduration=2.163888925 podStartE2EDuration="2.163888925s" podCreationTimestamp="2026-01-27 13:36:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:36:28.160199475 +0000 UTC m=+1771.370813584" watchObservedRunningTime="2026-01-27 13:36:28.163888925 +0000 UTC m=+1771.374503044" Jan 27 13:36:28 crc kubenswrapper[4786]: I0127 13:36:28.183721 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/novacell13f48-account-delete-bx2v2" podStartSLOduration=1.183697647 podStartE2EDuration="1.183697647s" podCreationTimestamp="2026-01-27 13:36:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:36:28.172983454 +0000 UTC m=+1771.383597573" watchObservedRunningTime="2026-01-27 13:36:28.183697647 +0000 UTC m=+1771.394311766" Jan 27 13:36:28 crc kubenswrapper[4786]: I0127 13:36:28.221298 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 27 13:36:28 crc kubenswrapper[4786]: I0127 13:36:28.228089 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 27 13:36:28 crc kubenswrapper[4786]: I0127 13:36:28.230646 4786 scope.go:117] "RemoveContainer" containerID="fda910f767d34ceca6fd886b75c30a05806b9bcf8a84b4fc3bbd8530d881cdc2" Jan 27 13:36:28 crc kubenswrapper[4786]: E0127 13:36:28.232322 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"fda910f767d34ceca6fd886b75c30a05806b9bcf8a84b4fc3bbd8530d881cdc2\": container with ID starting with fda910f767d34ceca6fd886b75c30a05806b9bcf8a84b4fc3bbd8530d881cdc2 not found: ID does not exist" containerID="fda910f767d34ceca6fd886b75c30a05806b9bcf8a84b4fc3bbd8530d881cdc2" Jan 27 13:36:28 crc kubenswrapper[4786]: I0127 13:36:28.232370 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fda910f767d34ceca6fd886b75c30a05806b9bcf8a84b4fc3bbd8530d881cdc2"} err="failed to get container status \"fda910f767d34ceca6fd886b75c30a05806b9bcf8a84b4fc3bbd8530d881cdc2\": rpc error: code = NotFound desc = could not find container \"fda910f767d34ceca6fd886b75c30a05806b9bcf8a84b4fc3bbd8530d881cdc2\": container with ID starting with fda910f767d34ceca6fd886b75c30a05806b9bcf8a84b4fc3bbd8530d881cdc2 not found: ID does not exist" Jan 27 13:36:28 crc kubenswrapper[4786]: I0127 13:36:28.548077 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="2ea95db7-d568-46c3-8874-98748692c4fe" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.168:8775/\": read tcp 10.217.0.2:59534->10.217.0.168:8775: read: connection reset by peer" Jan 27 13:36:28 crc kubenswrapper[4786]: I0127 13:36:28.548099 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="2ea95db7-d568-46c3-8874-98748692c4fe" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.168:8775/\": read tcp 10.217.0.2:59540->10.217.0.168:8775: read: connection reset by peer" Jan 27 13:36:28 crc kubenswrapper[4786]: I0127 13:36:28.930279 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:36:29 crc kubenswrapper[4786]: I0127 13:36:29.026468 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea95db7-d568-46c3-8874-98748692c4fe-config-data\") pod \"2ea95db7-d568-46c3-8874-98748692c4fe\" (UID: \"2ea95db7-d568-46c3-8874-98748692c4fe\") " Jan 27 13:36:29 crc kubenswrapper[4786]: I0127 13:36:29.027167 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmv22\" (UniqueName: \"kubernetes.io/projected/2ea95db7-d568-46c3-8874-98748692c4fe-kube-api-access-wmv22\") pod \"2ea95db7-d568-46c3-8874-98748692c4fe\" (UID: \"2ea95db7-d568-46c3-8874-98748692c4fe\") " Jan 27 13:36:29 crc kubenswrapper[4786]: I0127 13:36:29.027219 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea95db7-d568-46c3-8874-98748692c4fe-logs\") pod \"2ea95db7-d568-46c3-8874-98748692c4fe\" (UID: \"2ea95db7-d568-46c3-8874-98748692c4fe\") " Jan 27 13:36:29 crc kubenswrapper[4786]: I0127 13:36:29.027740 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ea95db7-d568-46c3-8874-98748692c4fe-logs" (OuterVolumeSpecName: "logs") pod "2ea95db7-d568-46c3-8874-98748692c4fe" (UID: "2ea95db7-d568-46c3-8874-98748692c4fe"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:36:29 crc kubenswrapper[4786]: I0127 13:36:29.032280 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ea95db7-d568-46c3-8874-98748692c4fe-kube-api-access-wmv22" (OuterVolumeSpecName: "kube-api-access-wmv22") pod "2ea95db7-d568-46c3-8874-98748692c4fe" (UID: "2ea95db7-d568-46c3-8874-98748692c4fe"). InnerVolumeSpecName "kube-api-access-wmv22". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:36:29 crc kubenswrapper[4786]: I0127 13:36:29.062965 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ea95db7-d568-46c3-8874-98748692c4fe-config-data" (OuterVolumeSpecName: "config-data") pod "2ea95db7-d568-46c3-8874-98748692c4fe" (UID: "2ea95db7-d568-46c3-8874-98748692c4fe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:36:29 crc kubenswrapper[4786]: I0127 13:36:29.128837 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea95db7-d568-46c3-8874-98748692c4fe-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:29 crc kubenswrapper[4786]: I0127 13:36:29.128875 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmv22\" (UniqueName: \"kubernetes.io/projected/2ea95db7-d568-46c3-8874-98748692c4fe-kube-api-access-wmv22\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:29 crc kubenswrapper[4786]: I0127 13:36:29.128889 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea95db7-d568-46c3-8874-98748692c4fe-logs\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:29 crc kubenswrapper[4786]: I0127 13:36:29.161033 4786 generic.go:334] "Generic (PLEG): container finished" podID="105e08bd-3b17-43c8-86c3-e503a0103227" containerID="85663c4bbed577d667088115352b6b9abaa15f9794edc68db0e7f6a8701e7831" exitCode=0 Jan 27 13:36:29 crc kubenswrapper[4786]: I0127 13:36:29.161112 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapid9c7-account-delete-p9v4p" event={"ID":"105e08bd-3b17-43c8-86c3-e503a0103227","Type":"ContainerDied","Data":"85663c4bbed577d667088115352b6b9abaa15f9794edc68db0e7f6a8701e7831"} Jan 27 13:36:29 crc kubenswrapper[4786]: I0127 13:36:29.172102 4786 generic.go:334] "Generic (PLEG): container finished" 
podID="2ea95db7-d568-46c3-8874-98748692c4fe" containerID="ba56aca22a2e60ed77549cca35bf7bd0cea1e10bb1b49792edabcb6881995ba3" exitCode=0 Jan 27 13:36:29 crc kubenswrapper[4786]: I0127 13:36:29.172163 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"2ea95db7-d568-46c3-8874-98748692c4fe","Type":"ContainerDied","Data":"ba56aca22a2e60ed77549cca35bf7bd0cea1e10bb1b49792edabcb6881995ba3"} Jan 27 13:36:29 crc kubenswrapper[4786]: I0127 13:36:29.172189 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"2ea95db7-d568-46c3-8874-98748692c4fe","Type":"ContainerDied","Data":"e7de9818126b48e425ef32264a59b9dd0b179c62792ab1459874ea7fa290f66e"} Jan 27 13:36:29 crc kubenswrapper[4786]: I0127 13:36:29.172206 4786 scope.go:117] "RemoveContainer" containerID="ba56aca22a2e60ed77549cca35bf7bd0cea1e10bb1b49792edabcb6881995ba3" Jan 27 13:36:29 crc kubenswrapper[4786]: I0127 13:36:29.172735 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:36:29 crc kubenswrapper[4786]: I0127 13:36:29.186340 4786 generic.go:334] "Generic (PLEG): container finished" podID="43de741a-7009-450e-ab6a-45475015fed0" containerID="5dd0eaa5f48cde69d4caa5058c8b1b5d4d7ed7fc75c93286097b843daab9eb19" exitCode=0 Jan 27 13:36:29 crc kubenswrapper[4786]: I0127 13:36:29.186772 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell13f48-account-delete-bx2v2" event={"ID":"43de741a-7009-450e-ab6a-45475015fed0","Type":"ContainerDied","Data":"5dd0eaa5f48cde69d4caa5058c8b1b5d4d7ed7fc75c93286097b843daab9eb19"} Jan 27 13:36:29 crc kubenswrapper[4786]: I0127 13:36:29.248389 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:36:29 crc kubenswrapper[4786]: I0127 13:36:29.251839 4786 scope.go:117] "RemoveContainer" containerID="104a660849b90da9aa6f1ac9393aefd431f96d0207343c6ddba13d10403447c6" Jan 27 13:36:29 crc kubenswrapper[4786]: I0127 13:36:29.255699 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:36:29 crc kubenswrapper[4786]: I0127 13:36:29.300481 4786 scope.go:117] "RemoveContainer" containerID="ba56aca22a2e60ed77549cca35bf7bd0cea1e10bb1b49792edabcb6881995ba3" Jan 27 13:36:29 crc kubenswrapper[4786]: E0127 13:36:29.300935 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba56aca22a2e60ed77549cca35bf7bd0cea1e10bb1b49792edabcb6881995ba3\": container with ID starting with ba56aca22a2e60ed77549cca35bf7bd0cea1e10bb1b49792edabcb6881995ba3 not found: ID does not exist" containerID="ba56aca22a2e60ed77549cca35bf7bd0cea1e10bb1b49792edabcb6881995ba3" Jan 27 13:36:29 crc kubenswrapper[4786]: I0127 13:36:29.300974 4786 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ba56aca22a2e60ed77549cca35bf7bd0cea1e10bb1b49792edabcb6881995ba3"} err="failed to get container status \"ba56aca22a2e60ed77549cca35bf7bd0cea1e10bb1b49792edabcb6881995ba3\": rpc error: code = NotFound desc = could not find container \"ba56aca22a2e60ed77549cca35bf7bd0cea1e10bb1b49792edabcb6881995ba3\": container with ID starting with ba56aca22a2e60ed77549cca35bf7bd0cea1e10bb1b49792edabcb6881995ba3 not found: ID does not exist" Jan 27 13:36:29 crc kubenswrapper[4786]: I0127 13:36:29.301000 4786 scope.go:117] "RemoveContainer" containerID="104a660849b90da9aa6f1ac9393aefd431f96d0207343c6ddba13d10403447c6" Jan 27 13:36:29 crc kubenswrapper[4786]: E0127 13:36:29.301991 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"104a660849b90da9aa6f1ac9393aefd431f96d0207343c6ddba13d10403447c6\": container with ID starting with 104a660849b90da9aa6f1ac9393aefd431f96d0207343c6ddba13d10403447c6 not found: ID does not exist" containerID="104a660849b90da9aa6f1ac9393aefd431f96d0207343c6ddba13d10403447c6" Jan 27 13:36:29 crc kubenswrapper[4786]: I0127 13:36:29.302030 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"104a660849b90da9aa6f1ac9393aefd431f96d0207343c6ddba13d10403447c6"} err="failed to get container status \"104a660849b90da9aa6f1ac9393aefd431f96d0207343c6ddba13d10403447c6\": rpc error: code = NotFound desc = could not find container \"104a660849b90da9aa6f1ac9393aefd431f96d0207343c6ddba13d10403447c6\": container with ID starting with 104a660849b90da9aa6f1ac9393aefd431f96d0207343c6ddba13d10403447c6 not found: ID does not exist" Jan 27 13:36:29 crc kubenswrapper[4786]: E0127 13:36:29.443639 4786 kubelet_node_status.go:756] "Failed to set some node status fields" err="failed to validate nodeIP: route ip+net: no such network interface" node="crc" Jan 27 13:36:29 crc kubenswrapper[4786]: I0127 13:36:29.467012 4786 
scope.go:117] "RemoveContainer" containerID="2f94fef961bba31453718ee505c65aba4d5e2d46c581dccdf2daaebbe3a39906" Jan 27 13:36:29 crc kubenswrapper[4786]: E0127 13:36:29.467238 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:36:29 crc kubenswrapper[4786]: I0127 13:36:29.478145 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ea95db7-d568-46c3-8874-98748692c4fe" path="/var/lib/kubelet/pods/2ea95db7-d568-46c3-8874-98748692c4fe/volumes" Jan 27 13:36:29 crc kubenswrapper[4786]: I0127 13:36:29.479221 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9de43c40-ccfa-41ef-ae49-bb23e5a71c45" path="/var/lib/kubelet/pods/9de43c40-ccfa-41ef-ae49-bb23e5a71c45/volumes" Jan 27 13:36:29 crc kubenswrapper[4786]: I0127 13:36:29.564867 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell032e8-account-delete-r44mb" Jan 27 13:36:29 crc kubenswrapper[4786]: I0127 13:36:29.581635 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:36:29 crc kubenswrapper[4786]: I0127 13:36:29.635578 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5c5d62c-579c-4649-9283-771b9f8d61d5-operator-scripts\") pod \"c5c5d62c-579c-4649-9283-771b9f8d61d5\" (UID: \"c5c5d62c-579c-4649-9283-771b9f8d61d5\") " Jan 27 13:36:29 crc kubenswrapper[4786]: I0127 13:36:29.635682 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d62b7637-2d50-41c6-8aaf-5f741b49e241-config-data\") pod \"d62b7637-2d50-41c6-8aaf-5f741b49e241\" (UID: \"d62b7637-2d50-41c6-8aaf-5f741b49e241\") " Jan 27 13:36:29 crc kubenswrapper[4786]: I0127 13:36:29.635775 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxdxw\" (UniqueName: \"kubernetes.io/projected/d62b7637-2d50-41c6-8aaf-5f741b49e241-kube-api-access-sxdxw\") pod \"d62b7637-2d50-41c6-8aaf-5f741b49e241\" (UID: \"d62b7637-2d50-41c6-8aaf-5f741b49e241\") " Jan 27 13:36:29 crc kubenswrapper[4786]: I0127 13:36:29.635831 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgkjk\" (UniqueName: \"kubernetes.io/projected/c5c5d62c-579c-4649-9283-771b9f8d61d5-kube-api-access-lgkjk\") pod \"c5c5d62c-579c-4649-9283-771b9f8d61d5\" (UID: \"c5c5d62c-579c-4649-9283-771b9f8d61d5\") " Jan 27 13:36:29 crc kubenswrapper[4786]: I0127 13:36:29.636505 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5c5d62c-579c-4649-9283-771b9f8d61d5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c5c5d62c-579c-4649-9283-771b9f8d61d5" (UID: "c5c5d62c-579c-4649-9283-771b9f8d61d5"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:36:29 crc kubenswrapper[4786]: I0127 13:36:29.639921 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d62b7637-2d50-41c6-8aaf-5f741b49e241-kube-api-access-sxdxw" (OuterVolumeSpecName: "kube-api-access-sxdxw") pod "d62b7637-2d50-41c6-8aaf-5f741b49e241" (UID: "d62b7637-2d50-41c6-8aaf-5f741b49e241"). InnerVolumeSpecName "kube-api-access-sxdxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:36:29 crc kubenswrapper[4786]: I0127 13:36:29.639976 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5c5d62c-579c-4649-9283-771b9f8d61d5-kube-api-access-lgkjk" (OuterVolumeSpecName: "kube-api-access-lgkjk") pod "c5c5d62c-579c-4649-9283-771b9f8d61d5" (UID: "c5c5d62c-579c-4649-9283-771b9f8d61d5"). InnerVolumeSpecName "kube-api-access-lgkjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:36:29 crc kubenswrapper[4786]: I0127 13:36:29.656693 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d62b7637-2d50-41c6-8aaf-5f741b49e241-config-data" (OuterVolumeSpecName: "config-data") pod "d62b7637-2d50-41c6-8aaf-5f741b49e241" (UID: "d62b7637-2d50-41c6-8aaf-5f741b49e241"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:36:29 crc kubenswrapper[4786]: I0127 13:36:29.737740 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxdxw\" (UniqueName: \"kubernetes.io/projected/d62b7637-2d50-41c6-8aaf-5f741b49e241-kube-api-access-sxdxw\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:29 crc kubenswrapper[4786]: I0127 13:36:29.737786 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgkjk\" (UniqueName: \"kubernetes.io/projected/c5c5d62c-579c-4649-9283-771b9f8d61d5-kube-api-access-lgkjk\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:29 crc kubenswrapper[4786]: I0127 13:36:29.737797 4786 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5c5d62c-579c-4649-9283-771b9f8d61d5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:29 crc kubenswrapper[4786]: I0127 13:36:29.737806 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d62b7637-2d50-41c6-8aaf-5f741b49e241-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:30 crc kubenswrapper[4786]: I0127 13:36:30.197480 4786 generic.go:334] "Generic (PLEG): container finished" podID="d62b7637-2d50-41c6-8aaf-5f741b49e241" containerID="a88465c939ad7b809b684a203530f4aa809458dffe9b0256f104de7cecbccb34" exitCode=0 Jan 27 13:36:30 crc kubenswrapper[4786]: I0127 13:36:30.197535 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:36:30 crc kubenswrapper[4786]: I0127 13:36:30.197573 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"d62b7637-2d50-41c6-8aaf-5f741b49e241","Type":"ContainerDied","Data":"a88465c939ad7b809b684a203530f4aa809458dffe9b0256f104de7cecbccb34"} Jan 27 13:36:30 crc kubenswrapper[4786]: I0127 13:36:30.198069 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"d62b7637-2d50-41c6-8aaf-5f741b49e241","Type":"ContainerDied","Data":"7f99f03ffdb6db5905f2ffe89db57cf1540efc0681f825c0925ce29db5ae014d"} Jan 27 13:36:30 crc kubenswrapper[4786]: I0127 13:36:30.198120 4786 scope.go:117] "RemoveContainer" containerID="a88465c939ad7b809b684a203530f4aa809458dffe9b0256f104de7cecbccb34" Jan 27 13:36:30 crc kubenswrapper[4786]: I0127 13:36:30.199867 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell032e8-account-delete-r44mb" event={"ID":"c5c5d62c-579c-4649-9283-771b9f8d61d5","Type":"ContainerDied","Data":"4956e91a473d5ae49110225d42d84c6bf371dcd0f287adb894552e6be9f2e6e2"} Jan 27 13:36:30 crc kubenswrapper[4786]: I0127 13:36:30.199892 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4956e91a473d5ae49110225d42d84c6bf371dcd0f287adb894552e6be9f2e6e2" Jan 27 13:36:30 crc kubenswrapper[4786]: I0127 13:36:30.199974 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell032e8-account-delete-r44mb" Jan 27 13:36:30 crc kubenswrapper[4786]: I0127 13:36:30.217584 4786 scope.go:117] "RemoveContainer" containerID="a88465c939ad7b809b684a203530f4aa809458dffe9b0256f104de7cecbccb34" Jan 27 13:36:30 crc kubenswrapper[4786]: E0127 13:36:30.218649 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a88465c939ad7b809b684a203530f4aa809458dffe9b0256f104de7cecbccb34\": container with ID starting with a88465c939ad7b809b684a203530f4aa809458dffe9b0256f104de7cecbccb34 not found: ID does not exist" containerID="a88465c939ad7b809b684a203530f4aa809458dffe9b0256f104de7cecbccb34" Jan 27 13:36:30 crc kubenswrapper[4786]: I0127 13:36:30.218717 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a88465c939ad7b809b684a203530f4aa809458dffe9b0256f104de7cecbccb34"} err="failed to get container status \"a88465c939ad7b809b684a203530f4aa809458dffe9b0256f104de7cecbccb34\": rpc error: code = NotFound desc = could not find container \"a88465c939ad7b809b684a203530f4aa809458dffe9b0256f104de7cecbccb34\": container with ID starting with a88465c939ad7b809b684a203530f4aa809458dffe9b0256f104de7cecbccb34 not found: ID does not exist" Jan 27 13:36:30 crc kubenswrapper[4786]: I0127 13:36:30.242004 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 27 13:36:30 crc kubenswrapper[4786]: I0127 13:36:30.248270 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 27 13:36:30 crc kubenswrapper[4786]: I0127 13:36:30.585245 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell13f48-account-delete-bx2v2" Jan 27 13:36:30 crc kubenswrapper[4786]: I0127 13:36:30.608701 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novaapid9c7-account-delete-p9v4p" Jan 27 13:36:30 crc kubenswrapper[4786]: I0127 13:36:30.652033 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/105e08bd-3b17-43c8-86c3-e503a0103227-operator-scripts\") pod \"105e08bd-3b17-43c8-86c3-e503a0103227\" (UID: \"105e08bd-3b17-43c8-86c3-e503a0103227\") " Jan 27 13:36:30 crc kubenswrapper[4786]: I0127 13:36:30.652272 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43de741a-7009-450e-ab6a-45475015fed0-operator-scripts\") pod \"43de741a-7009-450e-ab6a-45475015fed0\" (UID: \"43de741a-7009-450e-ab6a-45475015fed0\") " Jan 27 13:36:30 crc kubenswrapper[4786]: I0127 13:36:30.652335 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgcnl\" (UniqueName: \"kubernetes.io/projected/43de741a-7009-450e-ab6a-45475015fed0-kube-api-access-zgcnl\") pod \"43de741a-7009-450e-ab6a-45475015fed0\" (UID: \"43de741a-7009-450e-ab6a-45475015fed0\") " Jan 27 13:36:30 crc kubenswrapper[4786]: I0127 13:36:30.652395 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pncw\" (UniqueName: \"kubernetes.io/projected/105e08bd-3b17-43c8-86c3-e503a0103227-kube-api-access-4pncw\") pod \"105e08bd-3b17-43c8-86c3-e503a0103227\" (UID: \"105e08bd-3b17-43c8-86c3-e503a0103227\") " Jan 27 13:36:30 crc kubenswrapper[4786]: I0127 13:36:30.652873 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/105e08bd-3b17-43c8-86c3-e503a0103227-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "105e08bd-3b17-43c8-86c3-e503a0103227" (UID: "105e08bd-3b17-43c8-86c3-e503a0103227"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:36:30 crc kubenswrapper[4786]: I0127 13:36:30.652997 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43de741a-7009-450e-ab6a-45475015fed0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "43de741a-7009-450e-ab6a-45475015fed0" (UID: "43de741a-7009-450e-ab6a-45475015fed0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:36:30 crc kubenswrapper[4786]: I0127 13:36:30.658985 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43de741a-7009-450e-ab6a-45475015fed0-kube-api-access-zgcnl" (OuterVolumeSpecName: "kube-api-access-zgcnl") pod "43de741a-7009-450e-ab6a-45475015fed0" (UID: "43de741a-7009-450e-ab6a-45475015fed0"). InnerVolumeSpecName "kube-api-access-zgcnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:36:30 crc kubenswrapper[4786]: I0127 13:36:30.659772 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/105e08bd-3b17-43c8-86c3-e503a0103227-kube-api-access-4pncw" (OuterVolumeSpecName: "kube-api-access-4pncw") pod "105e08bd-3b17-43c8-86c3-e503a0103227" (UID: "105e08bd-3b17-43c8-86c3-e503a0103227"). InnerVolumeSpecName "kube-api-access-4pncw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:36:30 crc kubenswrapper[4786]: I0127 13:36:30.753614 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgcnl\" (UniqueName: \"kubernetes.io/projected/43de741a-7009-450e-ab6a-45475015fed0-kube-api-access-zgcnl\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:30 crc kubenswrapper[4786]: I0127 13:36:30.753642 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pncw\" (UniqueName: \"kubernetes.io/projected/105e08bd-3b17-43c8-86c3-e503a0103227-kube-api-access-4pncw\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:30 crc kubenswrapper[4786]: I0127 13:36:30.753651 4786 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/105e08bd-3b17-43c8-86c3-e503a0103227-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:30 crc kubenswrapper[4786]: I0127 13:36:30.753659 4786 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43de741a-7009-450e-ab6a-45475015fed0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:31 crc kubenswrapper[4786]: I0127 13:36:31.216357 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell13f48-account-delete-bx2v2" Jan 27 13:36:31 crc kubenswrapper[4786]: I0127 13:36:31.218657 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell13f48-account-delete-bx2v2" event={"ID":"43de741a-7009-450e-ab6a-45475015fed0","Type":"ContainerDied","Data":"51881b79adfb25694ac5e657c8364cb0f8c3e9674e6e5e58d4c10adb59f3c8c8"} Jan 27 13:36:31 crc kubenswrapper[4786]: I0127 13:36:31.218684 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51881b79adfb25694ac5e657c8364cb0f8c3e9674e6e5e58d4c10adb59f3c8c8" Jan 27 13:36:31 crc kubenswrapper[4786]: I0127 13:36:31.220000 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapid9c7-account-delete-p9v4p" Jan 27 13:36:31 crc kubenswrapper[4786]: I0127 13:36:31.219996 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapid9c7-account-delete-p9v4p" event={"ID":"105e08bd-3b17-43c8-86c3-e503a0103227","Type":"ContainerDied","Data":"60e3ad74611112e3714dee1941fa21c73a79fc9999fb5c8f887942c8090f29bf"} Jan 27 13:36:31 crc kubenswrapper[4786]: I0127 13:36:31.220130 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60e3ad74611112e3714dee1941fa21c73a79fc9999fb5c8f887942c8090f29bf" Jan 27 13:36:31 crc kubenswrapper[4786]: I0127 13:36:31.486839 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d62b7637-2d50-41c6-8aaf-5f741b49e241" path="/var/lib/kubelet/pods/d62b7637-2d50-41c6-8aaf-5f741b49e241/volumes" Jan 27 13:36:31 crc kubenswrapper[4786]: I0127 13:36:31.909951 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-75s89"] Jan 27 13:36:31 crc kubenswrapper[4786]: I0127 13:36:31.916996 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-75s89"] Jan 27 13:36:31 crc 
kubenswrapper[4786]: I0127 13:36:31.938754 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novacell032e8-account-delete-r44mb"] Jan 27 13:36:31 crc kubenswrapper[4786]: I0127 13:36:31.955047 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-32e8-account-create-update-zxns4"] Jan 27 13:36:31 crc kubenswrapper[4786]: I0127 13:36:31.967566 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novacell032e8-account-delete-r44mb"] Jan 27 13:36:31 crc kubenswrapper[4786]: I0127 13:36:31.980538 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-32e8-account-create-update-zxns4"] Jan 27 13:36:32 crc kubenswrapper[4786]: I0127 13:36:32.011019 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-db-create-q5zbn"] Jan 27 13:36:32 crc kubenswrapper[4786]: I0127 13:36:32.017599 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-db-create-q5zbn"] Jan 27 13:36:32 crc kubenswrapper[4786]: I0127 13:36:32.025397 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novaapid9c7-account-delete-p9v4p"] Jan 27 13:36:32 crc kubenswrapper[4786]: I0127 13:36:32.035190 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-d9c7-account-create-update-jc66v"] Jan 27 13:36:32 crc kubenswrapper[4786]: I0127 13:36:32.043936 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novaapid9c7-account-delete-p9v4p"] Jan 27 13:36:32 crc kubenswrapper[4786]: I0127 13:36:32.053059 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-d9c7-account-create-update-jc66v"] Jan 27 13:36:32 crc kubenswrapper[4786]: I0127 13:36:32.104777 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-fx5r8"] Jan 27 13:36:32 crc kubenswrapper[4786]: I0127 
13:36:32.111784 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-fx5r8"] Jan 27 13:36:32 crc kubenswrapper[4786]: I0127 13:36:32.124796 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novacell13f48-account-delete-bx2v2"] Jan 27 13:36:32 crc kubenswrapper[4786]: I0127 13:36:32.127832 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-3f48-account-create-update-jhxjh"] Jan 27 13:36:32 crc kubenswrapper[4786]: I0127 13:36:32.134450 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novacell13f48-account-delete-bx2v2"] Jan 27 13:36:32 crc kubenswrapper[4786]: I0127 13:36:32.143098 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-3f48-account-create-update-jhxjh"] Jan 27 13:36:33 crc kubenswrapper[4786]: I0127 13:36:33.474124 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="105e08bd-3b17-43c8-86c3-e503a0103227" path="/var/lib/kubelet/pods/105e08bd-3b17-43c8-86c3-e503a0103227/volumes" Jan 27 13:36:33 crc kubenswrapper[4786]: I0127 13:36:33.475406 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12763f09-4dc2-4040-a41d-69da50af4ad8" path="/var/lib/kubelet/pods/12763f09-4dc2-4040-a41d-69da50af4ad8/volumes" Jan 27 13:36:33 crc kubenswrapper[4786]: I0127 13:36:33.475965 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22a9a4ad-6d8d-48f7-9b01-625b9a2f289e" path="/var/lib/kubelet/pods/22a9a4ad-6d8d-48f7-9b01-625b9a2f289e/volumes" Jan 27 13:36:33 crc kubenswrapper[4786]: I0127 13:36:33.476448 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43de741a-7009-450e-ab6a-45475015fed0" path="/var/lib/kubelet/pods/43de741a-7009-450e-ab6a-45475015fed0/volumes" Jan 27 13:36:33 crc kubenswrapper[4786]: I0127 13:36:33.478224 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="4ad89564-d8c3-47fe-a44d-5c26725192aa" path="/var/lib/kubelet/pods/4ad89564-d8c3-47fe-a44d-5c26725192aa/volumes" Jan 27 13:36:33 crc kubenswrapper[4786]: I0127 13:36:33.478713 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7266df78-7efe-4eaa-b6d0-303fde74806c" path="/var/lib/kubelet/pods/7266df78-7efe-4eaa-b6d0-303fde74806c/volumes" Jan 27 13:36:33 crc kubenswrapper[4786]: I0127 13:36:33.479270 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a42ba7b0-9548-4d04-93ac-4e55c86dfa07" path="/var/lib/kubelet/pods/a42ba7b0-9548-4d04-93ac-4e55c86dfa07/volumes" Jan 27 13:36:33 crc kubenswrapper[4786]: I0127 13:36:33.480253 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5c5d62c-579c-4649-9283-771b9f8d61d5" path="/var/lib/kubelet/pods/c5c5d62c-579c-4649-9283-771b9f8d61d5/volumes" Jan 27 13:36:33 crc kubenswrapper[4786]: I0127 13:36:33.480823 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3df647e-a748-4d18-933a-daaf2bfad557" path="/var/lib/kubelet/pods/f3df647e-a748-4d18-933a-daaf2bfad557/volumes" Jan 27 13:36:33 crc kubenswrapper[4786]: I0127 13:36:33.983374 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-db-create-jqqcj"] Jan 27 13:36:33 crc kubenswrapper[4786]: E0127 13:36:33.994567 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9de43c40-ccfa-41ef-ae49-bb23e5a71c45" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 27 13:36:33 crc kubenswrapper[4786]: I0127 13:36:33.994647 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="9de43c40-ccfa-41ef-ae49-bb23e5a71c45" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 27 13:36:33 crc kubenswrapper[4786]: E0127 13:36:33.994662 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5c5d62c-579c-4649-9283-771b9f8d61d5" containerName="mariadb-account-delete" Jan 27 13:36:33 crc kubenswrapper[4786]: I0127 
13:36:33.994668 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5c5d62c-579c-4649-9283-771b9f8d61d5" containerName="mariadb-account-delete" Jan 27 13:36:33 crc kubenswrapper[4786]: E0127 13:36:33.994679 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ea95db7-d568-46c3-8874-98748692c4fe" containerName="nova-kuttl-metadata-metadata" Jan 27 13:36:33 crc kubenswrapper[4786]: I0127 13:36:33.994689 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ea95db7-d568-46c3-8874-98748692c4fe" containerName="nova-kuttl-metadata-metadata" Jan 27 13:36:33 crc kubenswrapper[4786]: E0127 13:36:33.994702 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="105e08bd-3b17-43c8-86c3-e503a0103227" containerName="mariadb-account-delete" Jan 27 13:36:33 crc kubenswrapper[4786]: I0127 13:36:33.994708 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="105e08bd-3b17-43c8-86c3-e503a0103227" containerName="mariadb-account-delete" Jan 27 13:36:33 crc kubenswrapper[4786]: E0127 13:36:33.994735 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ea95db7-d568-46c3-8874-98748692c4fe" containerName="nova-kuttl-metadata-log" Jan 27 13:36:33 crc kubenswrapper[4786]: I0127 13:36:33.994741 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ea95db7-d568-46c3-8874-98748692c4fe" containerName="nova-kuttl-metadata-log" Jan 27 13:36:33 crc kubenswrapper[4786]: E0127 13:36:33.994765 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d62b7637-2d50-41c6-8aaf-5f741b49e241" containerName="nova-kuttl-cell1-conductor-conductor" Jan 27 13:36:33 crc kubenswrapper[4786]: I0127 13:36:33.994773 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="d62b7637-2d50-41c6-8aaf-5f741b49e241" containerName="nova-kuttl-cell1-conductor-conductor" Jan 27 13:36:33 crc kubenswrapper[4786]: E0127 13:36:33.994786 4786 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="43de741a-7009-450e-ab6a-45475015fed0" containerName="mariadb-account-delete" Jan 27 13:36:33 crc kubenswrapper[4786]: I0127 13:36:33.994792 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="43de741a-7009-450e-ab6a-45475015fed0" containerName="mariadb-account-delete" Jan 27 13:36:33 crc kubenswrapper[4786]: I0127 13:36:33.995174 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="43de741a-7009-450e-ab6a-45475015fed0" containerName="mariadb-account-delete" Jan 27 13:36:33 crc kubenswrapper[4786]: I0127 13:36:33.995188 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="d62b7637-2d50-41c6-8aaf-5f741b49e241" containerName="nova-kuttl-cell1-conductor-conductor" Jan 27 13:36:33 crc kubenswrapper[4786]: I0127 13:36:33.995206 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="9de43c40-ccfa-41ef-ae49-bb23e5a71c45" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 27 13:36:33 crc kubenswrapper[4786]: I0127 13:36:33.995226 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5c5d62c-579c-4649-9283-771b9f8d61d5" containerName="mariadb-account-delete" Jan 27 13:36:33 crc kubenswrapper[4786]: I0127 13:36:33.995239 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="105e08bd-3b17-43c8-86c3-e503a0103227" containerName="mariadb-account-delete" Jan 27 13:36:33 crc kubenswrapper[4786]: I0127 13:36:33.995251 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ea95db7-d568-46c3-8874-98748692c4fe" containerName="nova-kuttl-metadata-metadata" Jan 27 13:36:33 crc kubenswrapper[4786]: I0127 13:36:33.995262 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ea95db7-d568-46c3-8874-98748692c4fe" containerName="nova-kuttl-metadata-log" Jan 27 13:36:33 crc kubenswrapper[4786]: I0127 13:36:33.996902 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-jqqcj" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.029711 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-jqqcj"] Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.088914 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-km65q"] Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.090160 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-km65q" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.095475 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-km65q"] Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.108655 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/020d5622-86fc-434d-95af-a38096706001-operator-scripts\") pod \"nova-api-db-create-jqqcj\" (UID: \"020d5622-86fc-434d-95af-a38096706001\") " pod="nova-kuttl-default/nova-api-db-create-jqqcj" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.108714 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlthg\" (UniqueName: \"kubernetes.io/projected/020d5622-86fc-434d-95af-a38096706001-kube-api-access-nlthg\") pod \"nova-api-db-create-jqqcj\" (UID: \"020d5622-86fc-434d-95af-a38096706001\") " pod="nova-kuttl-default/nova-api-db-create-jqqcj" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.190041 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-b196-account-create-update-tzbbp"] Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.190995 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-b196-account-create-update-tzbbp" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.195039 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-api-db-secret" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.201716 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-b196-account-create-update-tzbbp"] Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.210479 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9c32b0d-83ad-46dd-b388-2723bde9de7f-operator-scripts\") pod \"nova-cell0-db-create-km65q\" (UID: \"f9c32b0d-83ad-46dd-b388-2723bde9de7f\") " pod="nova-kuttl-default/nova-cell0-db-create-km65q" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.210532 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlthg\" (UniqueName: \"kubernetes.io/projected/020d5622-86fc-434d-95af-a38096706001-kube-api-access-nlthg\") pod \"nova-api-db-create-jqqcj\" (UID: \"020d5622-86fc-434d-95af-a38096706001\") " pod="nova-kuttl-default/nova-api-db-create-jqqcj" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.210822 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dd9h\" (UniqueName: \"kubernetes.io/projected/f9c32b0d-83ad-46dd-b388-2723bde9de7f-kube-api-access-5dd9h\") pod \"nova-cell0-db-create-km65q\" (UID: \"f9c32b0d-83ad-46dd-b388-2723bde9de7f\") " pod="nova-kuttl-default/nova-cell0-db-create-km65q" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.210909 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/020d5622-86fc-434d-95af-a38096706001-operator-scripts\") pod \"nova-api-db-create-jqqcj\" (UID: 
\"020d5622-86fc-434d-95af-a38096706001\") " pod="nova-kuttl-default/nova-api-db-create-jqqcj" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.211669 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/020d5622-86fc-434d-95af-a38096706001-operator-scripts\") pod \"nova-api-db-create-jqqcj\" (UID: \"020d5622-86fc-434d-95af-a38096706001\") " pod="nova-kuttl-default/nova-api-db-create-jqqcj" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.247182 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlthg\" (UniqueName: \"kubernetes.io/projected/020d5622-86fc-434d-95af-a38096706001-kube-api-access-nlthg\") pod \"nova-api-db-create-jqqcj\" (UID: \"020d5622-86fc-434d-95af-a38096706001\") " pod="nova-kuttl-default/nova-api-db-create-jqqcj" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.285755 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-cm58n"] Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.286717 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-cm58n" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.292539 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-cm58n"] Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.311943 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/040cc9fb-7f10-44a2-93f2-8249f45a9a59-operator-scripts\") pod \"nova-cell1-db-create-cm58n\" (UID: \"040cc9fb-7f10-44a2-93f2-8249f45a9a59\") " pod="nova-kuttl-default/nova-cell1-db-create-cm58n" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.311997 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg7mt\" (UniqueName: \"kubernetes.io/projected/040cc9fb-7f10-44a2-93f2-8249f45a9a59-kube-api-access-tg7mt\") pod \"nova-cell1-db-create-cm58n\" (UID: \"040cc9fb-7f10-44a2-93f2-8249f45a9a59\") " pod="nova-kuttl-default/nova-cell1-db-create-cm58n" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.312050 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d54ede2f-e6a9-495d-ae99-650d3465872f-operator-scripts\") pod \"nova-api-b196-account-create-update-tzbbp\" (UID: \"d54ede2f-e6a9-495d-ae99-650d3465872f\") " pod="nova-kuttl-default/nova-api-b196-account-create-update-tzbbp" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.312077 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dd9h\" (UniqueName: \"kubernetes.io/projected/f9c32b0d-83ad-46dd-b388-2723bde9de7f-kube-api-access-5dd9h\") pod \"nova-cell0-db-create-km65q\" (UID: \"f9c32b0d-83ad-46dd-b388-2723bde9de7f\") " pod="nova-kuttl-default/nova-cell0-db-create-km65q" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 
13:36:34.312103 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnt7d\" (UniqueName: \"kubernetes.io/projected/d54ede2f-e6a9-495d-ae99-650d3465872f-kube-api-access-pnt7d\") pod \"nova-api-b196-account-create-update-tzbbp\" (UID: \"d54ede2f-e6a9-495d-ae99-650d3465872f\") " pod="nova-kuttl-default/nova-api-b196-account-create-update-tzbbp" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.312134 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9c32b0d-83ad-46dd-b388-2723bde9de7f-operator-scripts\") pod \"nova-cell0-db-create-km65q\" (UID: \"f9c32b0d-83ad-46dd-b388-2723bde9de7f\") " pod="nova-kuttl-default/nova-cell0-db-create-km65q" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.312804 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9c32b0d-83ad-46dd-b388-2723bde9de7f-operator-scripts\") pod \"nova-cell0-db-create-km65q\" (UID: \"f9c32b0d-83ad-46dd-b388-2723bde9de7f\") " pod="nova-kuttl-default/nova-cell0-db-create-km65q" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.329016 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-jqqcj" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.332995 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dd9h\" (UniqueName: \"kubernetes.io/projected/f9c32b0d-83ad-46dd-b388-2723bde9de7f-kube-api-access-5dd9h\") pod \"nova-cell0-db-create-km65q\" (UID: \"f9c32b0d-83ad-46dd-b388-2723bde9de7f\") " pod="nova-kuttl-default/nova-cell0-db-create-km65q" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.393018 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-4ccc-account-create-update-dpbzp"] Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.394009 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-4ccc-account-create-update-dpbzp" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.396145 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell0-db-secret" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.409918 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-4ccc-account-create-update-dpbzp"] Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.411320 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-km65q" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.415010 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/040cc9fb-7f10-44a2-93f2-8249f45a9a59-operator-scripts\") pod \"nova-cell1-db-create-cm58n\" (UID: \"040cc9fb-7f10-44a2-93f2-8249f45a9a59\") " pod="nova-kuttl-default/nova-cell1-db-create-cm58n" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.415057 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg7mt\" (UniqueName: \"kubernetes.io/projected/040cc9fb-7f10-44a2-93f2-8249f45a9a59-kube-api-access-tg7mt\") pod \"nova-cell1-db-create-cm58n\" (UID: \"040cc9fb-7f10-44a2-93f2-8249f45a9a59\") " pod="nova-kuttl-default/nova-cell1-db-create-cm58n" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.415090 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d54ede2f-e6a9-495d-ae99-650d3465872f-operator-scripts\") pod \"nova-api-b196-account-create-update-tzbbp\" (UID: \"d54ede2f-e6a9-495d-ae99-650d3465872f\") " pod="nova-kuttl-default/nova-api-b196-account-create-update-tzbbp" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.415123 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnt7d\" (UniqueName: \"kubernetes.io/projected/d54ede2f-e6a9-495d-ae99-650d3465872f-kube-api-access-pnt7d\") pod \"nova-api-b196-account-create-update-tzbbp\" (UID: \"d54ede2f-e6a9-495d-ae99-650d3465872f\") " pod="nova-kuttl-default/nova-api-b196-account-create-update-tzbbp" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.415786 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/040cc9fb-7f10-44a2-93f2-8249f45a9a59-operator-scripts\") pod \"nova-cell1-db-create-cm58n\" (UID: \"040cc9fb-7f10-44a2-93f2-8249f45a9a59\") " pod="nova-kuttl-default/nova-cell1-db-create-cm58n" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.415968 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d54ede2f-e6a9-495d-ae99-650d3465872f-operator-scripts\") pod \"nova-api-b196-account-create-update-tzbbp\" (UID: \"d54ede2f-e6a9-495d-ae99-650d3465872f\") " pod="nova-kuttl-default/nova-api-b196-account-create-update-tzbbp" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.441255 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg7mt\" (UniqueName: \"kubernetes.io/projected/040cc9fb-7f10-44a2-93f2-8249f45a9a59-kube-api-access-tg7mt\") pod \"nova-cell1-db-create-cm58n\" (UID: \"040cc9fb-7f10-44a2-93f2-8249f45a9a59\") " pod="nova-kuttl-default/nova-cell1-db-create-cm58n" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.441502 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnt7d\" (UniqueName: \"kubernetes.io/projected/d54ede2f-e6a9-495d-ae99-650d3465872f-kube-api-access-pnt7d\") pod \"nova-api-b196-account-create-update-tzbbp\" (UID: \"d54ede2f-e6a9-495d-ae99-650d3465872f\") " pod="nova-kuttl-default/nova-api-b196-account-create-update-tzbbp" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.504918 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-b196-account-create-update-tzbbp" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.516294 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/678eb9a6-5e7a-48f7-8843-bd3c3933b133-operator-scripts\") pod \"nova-cell0-4ccc-account-create-update-dpbzp\" (UID: \"678eb9a6-5e7a-48f7-8843-bd3c3933b133\") " pod="nova-kuttl-default/nova-cell0-4ccc-account-create-update-dpbzp" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.516421 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzsvs\" (UniqueName: \"kubernetes.io/projected/678eb9a6-5e7a-48f7-8843-bd3c3933b133-kube-api-access-qzsvs\") pod \"nova-cell0-4ccc-account-create-update-dpbzp\" (UID: \"678eb9a6-5e7a-48f7-8843-bd3c3933b133\") " pod="nova-kuttl-default/nova-cell0-4ccc-account-create-update-dpbzp" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.601885 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-935b-account-create-update-hpggs"] Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.603442 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-935b-account-create-update-hpggs" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.606573 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell1-db-secret" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.609445 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-cm58n" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.609568 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-935b-account-create-update-hpggs"] Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.617667 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzsvs\" (UniqueName: \"kubernetes.io/projected/678eb9a6-5e7a-48f7-8843-bd3c3933b133-kube-api-access-qzsvs\") pod \"nova-cell0-4ccc-account-create-update-dpbzp\" (UID: \"678eb9a6-5e7a-48f7-8843-bd3c3933b133\") " pod="nova-kuttl-default/nova-cell0-4ccc-account-create-update-dpbzp" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.617753 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/678eb9a6-5e7a-48f7-8843-bd3c3933b133-operator-scripts\") pod \"nova-cell0-4ccc-account-create-update-dpbzp\" (UID: \"678eb9a6-5e7a-48f7-8843-bd3c3933b133\") " pod="nova-kuttl-default/nova-cell0-4ccc-account-create-update-dpbzp" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.618354 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/678eb9a6-5e7a-48f7-8843-bd3c3933b133-operator-scripts\") pod \"nova-cell0-4ccc-account-create-update-dpbzp\" (UID: \"678eb9a6-5e7a-48f7-8843-bd3c3933b133\") " pod="nova-kuttl-default/nova-cell0-4ccc-account-create-update-dpbzp" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.641963 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzsvs\" (UniqueName: \"kubernetes.io/projected/678eb9a6-5e7a-48f7-8843-bd3c3933b133-kube-api-access-qzsvs\") pod \"nova-cell0-4ccc-account-create-update-dpbzp\" (UID: \"678eb9a6-5e7a-48f7-8843-bd3c3933b133\") " pod="nova-kuttl-default/nova-cell0-4ccc-account-create-update-dpbzp" 
Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.719865 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/232589d7-b1da-4835-9658-979713413d19-operator-scripts\") pod \"nova-cell1-935b-account-create-update-hpggs\" (UID: \"232589d7-b1da-4835-9658-979713413d19\") " pod="nova-kuttl-default/nova-cell1-935b-account-create-update-hpggs" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.719964 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54pz4\" (UniqueName: \"kubernetes.io/projected/232589d7-b1da-4835-9658-979713413d19-kube-api-access-54pz4\") pod \"nova-cell1-935b-account-create-update-hpggs\" (UID: \"232589d7-b1da-4835-9658-979713413d19\") " pod="nova-kuttl-default/nova-cell1-935b-account-create-update-hpggs" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.740324 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-4ccc-account-create-update-dpbzp" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.821940 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54pz4\" (UniqueName: \"kubernetes.io/projected/232589d7-b1da-4835-9658-979713413d19-kube-api-access-54pz4\") pod \"nova-cell1-935b-account-create-update-hpggs\" (UID: \"232589d7-b1da-4835-9658-979713413d19\") " pod="nova-kuttl-default/nova-cell1-935b-account-create-update-hpggs" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.822138 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/232589d7-b1da-4835-9658-979713413d19-operator-scripts\") pod \"nova-cell1-935b-account-create-update-hpggs\" (UID: \"232589d7-b1da-4835-9658-979713413d19\") " pod="nova-kuttl-default/nova-cell1-935b-account-create-update-hpggs" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.824391 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/232589d7-b1da-4835-9658-979713413d19-operator-scripts\") pod \"nova-cell1-935b-account-create-update-hpggs\" (UID: \"232589d7-b1da-4835-9658-979713413d19\") " pod="nova-kuttl-default/nova-cell1-935b-account-create-update-hpggs" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.844681 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54pz4\" (UniqueName: \"kubernetes.io/projected/232589d7-b1da-4835-9658-979713413d19-kube-api-access-54pz4\") pod \"nova-cell1-935b-account-create-update-hpggs\" (UID: \"232589d7-b1da-4835-9658-979713413d19\") " pod="nova-kuttl-default/nova-cell1-935b-account-create-update-hpggs" Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.849099 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-jqqcj"] Jan 
27 13:36:34 crc kubenswrapper[4786]: W0127 13:36:34.854862 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod020d5622_86fc_434d_95af_a38096706001.slice/crio-20e00afc0874c51c68729f60e454e4a6edf6b39180b51e5ea65f54799c7eb289 WatchSource:0}: Error finding container 20e00afc0874c51c68729f60e454e4a6edf6b39180b51e5ea65f54799c7eb289: Status 404 returned error can't find the container with id 20e00afc0874c51c68729f60e454e4a6edf6b39180b51e5ea65f54799c7eb289 Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.932246 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-km65q"] Jan 27 13:36:34 crc kubenswrapper[4786]: I0127 13:36:34.948991 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-935b-account-create-update-hpggs" Jan 27 13:36:35 crc kubenswrapper[4786]: I0127 13:36:35.048540 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-b196-account-create-update-tzbbp"] Jan 27 13:36:35 crc kubenswrapper[4786]: I0127 13:36:35.154077 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-cm58n"] Jan 27 13:36:35 crc kubenswrapper[4786]: I0127 13:36:35.223716 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-4ccc-account-create-update-dpbzp"] Jan 27 13:36:35 crc kubenswrapper[4786]: W0127 13:36:35.246171 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod678eb9a6_5e7a_48f7_8843_bd3c3933b133.slice/crio-a54bcf11b0c666b5bc98b003e769c26c0ee9665539925687f8cb54150b4969b6 WatchSource:0}: Error finding container a54bcf11b0c666b5bc98b003e769c26c0ee9665539925687f8cb54150b4969b6: Status 404 returned error can't find the container with id 
a54bcf11b0c666b5bc98b003e769c26c0ee9665539925687f8cb54150b4969b6 Jan 27 13:36:35 crc kubenswrapper[4786]: I0127 13:36:35.264803 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-4ccc-account-create-update-dpbzp" event={"ID":"678eb9a6-5e7a-48f7-8843-bd3c3933b133","Type":"ContainerStarted","Data":"a54bcf11b0c666b5bc98b003e769c26c0ee9665539925687f8cb54150b4969b6"} Jan 27 13:36:35 crc kubenswrapper[4786]: I0127 13:36:35.266172 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-cm58n" event={"ID":"040cc9fb-7f10-44a2-93f2-8249f45a9a59","Type":"ContainerStarted","Data":"be2609db48f450499387b5b45b24c47a51e88ec40843e89cb71b4f531d591c77"} Jan 27 13:36:35 crc kubenswrapper[4786]: I0127 13:36:35.268898 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-b196-account-create-update-tzbbp" event={"ID":"d54ede2f-e6a9-495d-ae99-650d3465872f","Type":"ContainerStarted","Data":"f695fbe538bb43fad1842572a2a1a58bc09a50d4d003d2e749bbfd23d386ccd4"} Jan 27 13:36:35 crc kubenswrapper[4786]: I0127 13:36:35.268965 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-b196-account-create-update-tzbbp" event={"ID":"d54ede2f-e6a9-495d-ae99-650d3465872f","Type":"ContainerStarted","Data":"f06deb4fed7564e0337ec6b1943e495a0044365cf4292454179f6d523272c9c2"} Jan 27 13:36:35 crc kubenswrapper[4786]: I0127 13:36:35.271710 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-jqqcj" event={"ID":"020d5622-86fc-434d-95af-a38096706001","Type":"ContainerStarted","Data":"7aa8402801ee2c611c7f5a6909110ec3bccafb09c39ffa1c45aa598b5bddd0c8"} Jan 27 13:36:35 crc kubenswrapper[4786]: I0127 13:36:35.271888 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-jqqcj" 
event={"ID":"020d5622-86fc-434d-95af-a38096706001","Type":"ContainerStarted","Data":"20e00afc0874c51c68729f60e454e4a6edf6b39180b51e5ea65f54799c7eb289"} Jan 27 13:36:35 crc kubenswrapper[4786]: I0127 13:36:35.274512 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-km65q" event={"ID":"f9c32b0d-83ad-46dd-b388-2723bde9de7f","Type":"ContainerStarted","Data":"15b4207e9e3b5f257f21099e65296e910375b9954a8c148f92cf62f9e6e7ae0e"} Jan 27 13:36:35 crc kubenswrapper[4786]: I0127 13:36:35.274551 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-km65q" event={"ID":"f9c32b0d-83ad-46dd-b388-2723bde9de7f","Type":"ContainerStarted","Data":"7fafa2af6ead313f7db8e40b56f7a0765367fe521f85d936a0f5ece918dfe081"} Jan 27 13:36:35 crc kubenswrapper[4786]: I0127 13:36:35.291856 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-api-b196-account-create-update-tzbbp" podStartSLOduration=1.291833953 podStartE2EDuration="1.291833953s" podCreationTimestamp="2026-01-27 13:36:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:36:35.283204197 +0000 UTC m=+1778.493818326" watchObservedRunningTime="2026-01-27 13:36:35.291833953 +0000 UTC m=+1778.502448072" Jan 27 13:36:35 crc kubenswrapper[4786]: I0127 13:36:35.315662 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-cell0-db-create-km65q" podStartSLOduration=1.3156452650000001 podStartE2EDuration="1.315645265s" podCreationTimestamp="2026-01-27 13:36:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:36:35.307729318 +0000 UTC m=+1778.518343447" watchObservedRunningTime="2026-01-27 13:36:35.315645265 +0000 UTC m=+1778.526259384" Jan 27 13:36:35 crc 
kubenswrapper[4786]: I0127 13:36:35.431688 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-935b-account-create-update-hpggs"] Jan 27 13:36:36 crc kubenswrapper[4786]: I0127 13:36:36.283988 4786 generic.go:334] "Generic (PLEG): container finished" podID="f9c32b0d-83ad-46dd-b388-2723bde9de7f" containerID="15b4207e9e3b5f257f21099e65296e910375b9954a8c148f92cf62f9e6e7ae0e" exitCode=0 Jan 27 13:36:36 crc kubenswrapper[4786]: I0127 13:36:36.284086 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-km65q" event={"ID":"f9c32b0d-83ad-46dd-b388-2723bde9de7f","Type":"ContainerDied","Data":"15b4207e9e3b5f257f21099e65296e910375b9954a8c148f92cf62f9e6e7ae0e"} Jan 27 13:36:36 crc kubenswrapper[4786]: I0127 13:36:36.285766 4786 generic.go:334] "Generic (PLEG): container finished" podID="232589d7-b1da-4835-9658-979713413d19" containerID="a72ad032f6e5e09bb494e8d26bcb346de2e4f33e093eed9843c72401c8827068" exitCode=0 Jan 27 13:36:36 crc kubenswrapper[4786]: I0127 13:36:36.285846 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-935b-account-create-update-hpggs" event={"ID":"232589d7-b1da-4835-9658-979713413d19","Type":"ContainerDied","Data":"a72ad032f6e5e09bb494e8d26bcb346de2e4f33e093eed9843c72401c8827068"} Jan 27 13:36:36 crc kubenswrapper[4786]: I0127 13:36:36.285879 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-935b-account-create-update-hpggs" event={"ID":"232589d7-b1da-4835-9658-979713413d19","Type":"ContainerStarted","Data":"bdc4fc3b8b11bee631ba7eb8e90c905db3f10d4c7bbc5ef668251b7e7f04b7f2"} Jan 27 13:36:36 crc kubenswrapper[4786]: I0127 13:36:36.287673 4786 generic.go:334] "Generic (PLEG): container finished" podID="678eb9a6-5e7a-48f7-8843-bd3c3933b133" containerID="73f055031c91c0e12e194e0d68406eda35e16291a14a4d3c558be1c9ff4b3cc2" exitCode=0 Jan 27 13:36:36 crc kubenswrapper[4786]: I0127 13:36:36.287755 4786 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-4ccc-account-create-update-dpbzp" event={"ID":"678eb9a6-5e7a-48f7-8843-bd3c3933b133","Type":"ContainerDied","Data":"73f055031c91c0e12e194e0d68406eda35e16291a14a4d3c558be1c9ff4b3cc2"} Jan 27 13:36:36 crc kubenswrapper[4786]: I0127 13:36:36.289642 4786 generic.go:334] "Generic (PLEG): container finished" podID="040cc9fb-7f10-44a2-93f2-8249f45a9a59" containerID="b2cad2ce810ffa2641f2961cbc00d46bcfca6d63116d9f257f1e11e6d1423c93" exitCode=0 Jan 27 13:36:36 crc kubenswrapper[4786]: I0127 13:36:36.289758 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-cm58n" event={"ID":"040cc9fb-7f10-44a2-93f2-8249f45a9a59","Type":"ContainerDied","Data":"b2cad2ce810ffa2641f2961cbc00d46bcfca6d63116d9f257f1e11e6d1423c93"} Jan 27 13:36:36 crc kubenswrapper[4786]: I0127 13:36:36.292546 4786 generic.go:334] "Generic (PLEG): container finished" podID="d54ede2f-e6a9-495d-ae99-650d3465872f" containerID="f695fbe538bb43fad1842572a2a1a58bc09a50d4d003d2e749bbfd23d386ccd4" exitCode=0 Jan 27 13:36:36 crc kubenswrapper[4786]: I0127 13:36:36.292645 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-b196-account-create-update-tzbbp" event={"ID":"d54ede2f-e6a9-495d-ae99-650d3465872f","Type":"ContainerDied","Data":"f695fbe538bb43fad1842572a2a1a58bc09a50d4d003d2e749bbfd23d386ccd4"} Jan 27 13:36:36 crc kubenswrapper[4786]: I0127 13:36:36.293914 4786 generic.go:334] "Generic (PLEG): container finished" podID="020d5622-86fc-434d-95af-a38096706001" containerID="7aa8402801ee2c611c7f5a6909110ec3bccafb09c39ffa1c45aa598b5bddd0c8" exitCode=0 Jan 27 13:36:36 crc kubenswrapper[4786]: I0127 13:36:36.293948 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-jqqcj" 
event={"ID":"020d5622-86fc-434d-95af-a38096706001","Type":"ContainerDied","Data":"7aa8402801ee2c611c7f5a6909110ec3bccafb09c39ffa1c45aa598b5bddd0c8"} Jan 27 13:36:36 crc kubenswrapper[4786]: I0127 13:36:36.612184 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-jqqcj" Jan 27 13:36:36 crc kubenswrapper[4786]: I0127 13:36:36.649413 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/020d5622-86fc-434d-95af-a38096706001-operator-scripts\") pod \"020d5622-86fc-434d-95af-a38096706001\" (UID: \"020d5622-86fc-434d-95af-a38096706001\") " Jan 27 13:36:36 crc kubenswrapper[4786]: I0127 13:36:36.649742 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlthg\" (UniqueName: \"kubernetes.io/projected/020d5622-86fc-434d-95af-a38096706001-kube-api-access-nlthg\") pod \"020d5622-86fc-434d-95af-a38096706001\" (UID: \"020d5622-86fc-434d-95af-a38096706001\") " Jan 27 13:36:36 crc kubenswrapper[4786]: I0127 13:36:36.650186 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/020d5622-86fc-434d-95af-a38096706001-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "020d5622-86fc-434d-95af-a38096706001" (UID: "020d5622-86fc-434d-95af-a38096706001"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:36:36 crc kubenswrapper[4786]: I0127 13:36:36.650590 4786 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/020d5622-86fc-434d-95af-a38096706001-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:36 crc kubenswrapper[4786]: I0127 13:36:36.654924 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/020d5622-86fc-434d-95af-a38096706001-kube-api-access-nlthg" (OuterVolumeSpecName: "kube-api-access-nlthg") pod "020d5622-86fc-434d-95af-a38096706001" (UID: "020d5622-86fc-434d-95af-a38096706001"). InnerVolumeSpecName "kube-api-access-nlthg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:36:36 crc kubenswrapper[4786]: I0127 13:36:36.752515 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlthg\" (UniqueName: \"kubernetes.io/projected/020d5622-86fc-434d-95af-a38096706001-kube-api-access-nlthg\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:37 crc kubenswrapper[4786]: I0127 13:36:37.304884 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-jqqcj" event={"ID":"020d5622-86fc-434d-95af-a38096706001","Type":"ContainerDied","Data":"20e00afc0874c51c68729f60e454e4a6edf6b39180b51e5ea65f54799c7eb289"} Jan 27 13:36:37 crc kubenswrapper[4786]: I0127 13:36:37.304926 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20e00afc0874c51c68729f60e454e4a6edf6b39180b51e5ea65f54799c7eb289" Jan 27 13:36:37 crc kubenswrapper[4786]: I0127 13:36:37.305723 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-jqqcj" Jan 27 13:36:37 crc kubenswrapper[4786]: I0127 13:36:37.670802 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-cm58n" Jan 27 13:36:37 crc kubenswrapper[4786]: I0127 13:36:37.815953 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-4ccc-account-create-update-dpbzp" Jan 27 13:36:37 crc kubenswrapper[4786]: I0127 13:36:37.816490 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-b196-account-create-update-tzbbp" Jan 27 13:36:37 crc kubenswrapper[4786]: I0127 13:36:37.834715 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg7mt\" (UniqueName: \"kubernetes.io/projected/040cc9fb-7f10-44a2-93f2-8249f45a9a59-kube-api-access-tg7mt\") pod \"040cc9fb-7f10-44a2-93f2-8249f45a9a59\" (UID: \"040cc9fb-7f10-44a2-93f2-8249f45a9a59\") " Jan 27 13:36:37 crc kubenswrapper[4786]: I0127 13:36:37.834823 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/040cc9fb-7f10-44a2-93f2-8249f45a9a59-operator-scripts\") pod \"040cc9fb-7f10-44a2-93f2-8249f45a9a59\" (UID: \"040cc9fb-7f10-44a2-93f2-8249f45a9a59\") " Jan 27 13:36:37 crc kubenswrapper[4786]: I0127 13:36:37.835515 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/040cc9fb-7f10-44a2-93f2-8249f45a9a59-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "040cc9fb-7f10-44a2-93f2-8249f45a9a59" (UID: "040cc9fb-7f10-44a2-93f2-8249f45a9a59"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:36:37 crc kubenswrapper[4786]: I0127 13:36:37.841663 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/040cc9fb-7f10-44a2-93f2-8249f45a9a59-kube-api-access-tg7mt" (OuterVolumeSpecName: "kube-api-access-tg7mt") pod "040cc9fb-7f10-44a2-93f2-8249f45a9a59" (UID: "040cc9fb-7f10-44a2-93f2-8249f45a9a59"). InnerVolumeSpecName "kube-api-access-tg7mt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:36:37 crc kubenswrapper[4786]: I0127 13:36:37.890925 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-km65q" Jan 27 13:36:37 crc kubenswrapper[4786]: I0127 13:36:37.896743 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-935b-account-create-update-hpggs" Jan 27 13:36:37 crc kubenswrapper[4786]: I0127 13:36:37.936650 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d54ede2f-e6a9-495d-ae99-650d3465872f-operator-scripts\") pod \"d54ede2f-e6a9-495d-ae99-650d3465872f\" (UID: \"d54ede2f-e6a9-495d-ae99-650d3465872f\") " Jan 27 13:36:37 crc kubenswrapper[4786]: I0127 13:36:37.936702 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/678eb9a6-5e7a-48f7-8843-bd3c3933b133-operator-scripts\") pod \"678eb9a6-5e7a-48f7-8843-bd3c3933b133\" (UID: \"678eb9a6-5e7a-48f7-8843-bd3c3933b133\") " Jan 27 13:36:37 crc kubenswrapper[4786]: I0127 13:36:37.936816 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnt7d\" (UniqueName: \"kubernetes.io/projected/d54ede2f-e6a9-495d-ae99-650d3465872f-kube-api-access-pnt7d\") pod \"d54ede2f-e6a9-495d-ae99-650d3465872f\" (UID: 
\"d54ede2f-e6a9-495d-ae99-650d3465872f\") " Jan 27 13:36:37 crc kubenswrapper[4786]: I0127 13:36:37.936856 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzsvs\" (UniqueName: \"kubernetes.io/projected/678eb9a6-5e7a-48f7-8843-bd3c3933b133-kube-api-access-qzsvs\") pod \"678eb9a6-5e7a-48f7-8843-bd3c3933b133\" (UID: \"678eb9a6-5e7a-48f7-8843-bd3c3933b133\") " Jan 27 13:36:37 crc kubenswrapper[4786]: I0127 13:36:37.937114 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d54ede2f-e6a9-495d-ae99-650d3465872f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d54ede2f-e6a9-495d-ae99-650d3465872f" (UID: "d54ede2f-e6a9-495d-ae99-650d3465872f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:36:37 crc kubenswrapper[4786]: I0127 13:36:37.937337 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg7mt\" (UniqueName: \"kubernetes.io/projected/040cc9fb-7f10-44a2-93f2-8249f45a9a59-kube-api-access-tg7mt\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:37 crc kubenswrapper[4786]: I0127 13:36:37.937355 4786 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d54ede2f-e6a9-495d-ae99-650d3465872f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:37 crc kubenswrapper[4786]: I0127 13:36:37.937363 4786 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/040cc9fb-7f10-44a2-93f2-8249f45a9a59-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:37 crc kubenswrapper[4786]: I0127 13:36:37.937654 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/678eb9a6-5e7a-48f7-8843-bd3c3933b133-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "678eb9a6-5e7a-48f7-8843-bd3c3933b133" (UID: 
"678eb9a6-5e7a-48f7-8843-bd3c3933b133"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:36:37 crc kubenswrapper[4786]: I0127 13:36:37.940011 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/678eb9a6-5e7a-48f7-8843-bd3c3933b133-kube-api-access-qzsvs" (OuterVolumeSpecName: "kube-api-access-qzsvs") pod "678eb9a6-5e7a-48f7-8843-bd3c3933b133" (UID: "678eb9a6-5e7a-48f7-8843-bd3c3933b133"). InnerVolumeSpecName "kube-api-access-qzsvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:36:37 crc kubenswrapper[4786]: I0127 13:36:37.940197 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d54ede2f-e6a9-495d-ae99-650d3465872f-kube-api-access-pnt7d" (OuterVolumeSpecName: "kube-api-access-pnt7d") pod "d54ede2f-e6a9-495d-ae99-650d3465872f" (UID: "d54ede2f-e6a9-495d-ae99-650d3465872f"). InnerVolumeSpecName "kube-api-access-pnt7d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:36:38 crc kubenswrapper[4786]: I0127 13:36:38.038440 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54pz4\" (UniqueName: \"kubernetes.io/projected/232589d7-b1da-4835-9658-979713413d19-kube-api-access-54pz4\") pod \"232589d7-b1da-4835-9658-979713413d19\" (UID: \"232589d7-b1da-4835-9658-979713413d19\") " Jan 27 13:36:38 crc kubenswrapper[4786]: I0127 13:36:38.038529 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dd9h\" (UniqueName: \"kubernetes.io/projected/f9c32b0d-83ad-46dd-b388-2723bde9de7f-kube-api-access-5dd9h\") pod \"f9c32b0d-83ad-46dd-b388-2723bde9de7f\" (UID: \"f9c32b0d-83ad-46dd-b388-2723bde9de7f\") " Jan 27 13:36:38 crc kubenswrapper[4786]: I0127 13:36:38.038595 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/232589d7-b1da-4835-9658-979713413d19-operator-scripts\") pod \"232589d7-b1da-4835-9658-979713413d19\" (UID: \"232589d7-b1da-4835-9658-979713413d19\") " Jan 27 13:36:38 crc kubenswrapper[4786]: I0127 13:36:38.038658 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9c32b0d-83ad-46dd-b388-2723bde9de7f-operator-scripts\") pod \"f9c32b0d-83ad-46dd-b388-2723bde9de7f\" (UID: \"f9c32b0d-83ad-46dd-b388-2723bde9de7f\") " Jan 27 13:36:38 crc kubenswrapper[4786]: I0127 13:36:38.038960 4786 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/678eb9a6-5e7a-48f7-8843-bd3c3933b133-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:38 crc kubenswrapper[4786]: I0127 13:36:38.038979 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnt7d\" (UniqueName: 
\"kubernetes.io/projected/d54ede2f-e6a9-495d-ae99-650d3465872f-kube-api-access-pnt7d\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:38 crc kubenswrapper[4786]: I0127 13:36:38.038990 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzsvs\" (UniqueName: \"kubernetes.io/projected/678eb9a6-5e7a-48f7-8843-bd3c3933b133-kube-api-access-qzsvs\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:38 crc kubenswrapper[4786]: I0127 13:36:38.039318 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/232589d7-b1da-4835-9658-979713413d19-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "232589d7-b1da-4835-9658-979713413d19" (UID: "232589d7-b1da-4835-9658-979713413d19"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:36:38 crc kubenswrapper[4786]: I0127 13:36:38.039331 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9c32b0d-83ad-46dd-b388-2723bde9de7f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f9c32b0d-83ad-46dd-b388-2723bde9de7f" (UID: "f9c32b0d-83ad-46dd-b388-2723bde9de7f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:36:38 crc kubenswrapper[4786]: I0127 13:36:38.041340 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9c32b0d-83ad-46dd-b388-2723bde9de7f-kube-api-access-5dd9h" (OuterVolumeSpecName: "kube-api-access-5dd9h") pod "f9c32b0d-83ad-46dd-b388-2723bde9de7f" (UID: "f9c32b0d-83ad-46dd-b388-2723bde9de7f"). InnerVolumeSpecName "kube-api-access-5dd9h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:36:38 crc kubenswrapper[4786]: I0127 13:36:38.041834 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/232589d7-b1da-4835-9658-979713413d19-kube-api-access-54pz4" (OuterVolumeSpecName: "kube-api-access-54pz4") pod "232589d7-b1da-4835-9658-979713413d19" (UID: "232589d7-b1da-4835-9658-979713413d19"). InnerVolumeSpecName "kube-api-access-54pz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:36:38 crc kubenswrapper[4786]: I0127 13:36:38.140826 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dd9h\" (UniqueName: \"kubernetes.io/projected/f9c32b0d-83ad-46dd-b388-2723bde9de7f-kube-api-access-5dd9h\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:38 crc kubenswrapper[4786]: I0127 13:36:38.141037 4786 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/232589d7-b1da-4835-9658-979713413d19-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:38 crc kubenswrapper[4786]: I0127 13:36:38.141099 4786 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9c32b0d-83ad-46dd-b388-2723bde9de7f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:38 crc kubenswrapper[4786]: I0127 13:36:38.141182 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54pz4\" (UniqueName: \"kubernetes.io/projected/232589d7-b1da-4835-9658-979713413d19-kube-api-access-54pz4\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:38 crc kubenswrapper[4786]: I0127 13:36:38.317873 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-km65q" event={"ID":"f9c32b0d-83ad-46dd-b388-2723bde9de7f","Type":"ContainerDied","Data":"7fafa2af6ead313f7db8e40b56f7a0765367fe521f85d936a0f5ece918dfe081"} Jan 27 13:36:38 crc kubenswrapper[4786]: I0127 
13:36:38.317946 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fafa2af6ead313f7db8e40b56f7a0765367fe521f85d936a0f5ece918dfe081" Jan 27 13:36:38 crc kubenswrapper[4786]: I0127 13:36:38.318007 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-km65q" Jan 27 13:36:38 crc kubenswrapper[4786]: I0127 13:36:38.321188 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-935b-account-create-update-hpggs" event={"ID":"232589d7-b1da-4835-9658-979713413d19","Type":"ContainerDied","Data":"bdc4fc3b8b11bee631ba7eb8e90c905db3f10d4c7bbc5ef668251b7e7f04b7f2"} Jan 27 13:36:38 crc kubenswrapper[4786]: I0127 13:36:38.321304 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdc4fc3b8b11bee631ba7eb8e90c905db3f10d4c7bbc5ef668251b7e7f04b7f2" Jan 27 13:36:38 crc kubenswrapper[4786]: I0127 13:36:38.321200 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-935b-account-create-update-hpggs" Jan 27 13:36:38 crc kubenswrapper[4786]: I0127 13:36:38.323122 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-4ccc-account-create-update-dpbzp" event={"ID":"678eb9a6-5e7a-48f7-8843-bd3c3933b133","Type":"ContainerDied","Data":"a54bcf11b0c666b5bc98b003e769c26c0ee9665539925687f8cb54150b4969b6"} Jan 27 13:36:38 crc kubenswrapper[4786]: I0127 13:36:38.323151 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-4ccc-account-create-update-dpbzp" Jan 27 13:36:38 crc kubenswrapper[4786]: I0127 13:36:38.323159 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a54bcf11b0c666b5bc98b003e769c26c0ee9665539925687f8cb54150b4969b6" Jan 27 13:36:38 crc kubenswrapper[4786]: I0127 13:36:38.324585 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-cm58n" event={"ID":"040cc9fb-7f10-44a2-93f2-8249f45a9a59","Type":"ContainerDied","Data":"be2609db48f450499387b5b45b24c47a51e88ec40843e89cb71b4f531d591c77"} Jan 27 13:36:38 crc kubenswrapper[4786]: I0127 13:36:38.324631 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be2609db48f450499387b5b45b24c47a51e88ec40843e89cb71b4f531d591c77" Jan 27 13:36:38 crc kubenswrapper[4786]: I0127 13:36:38.324662 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-cm58n" Jan 27 13:36:38 crc kubenswrapper[4786]: I0127 13:36:38.327968 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-b196-account-create-update-tzbbp" Jan 27 13:36:38 crc kubenswrapper[4786]: I0127 13:36:38.327959 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-b196-account-create-update-tzbbp" event={"ID":"d54ede2f-e6a9-495d-ae99-650d3465872f","Type":"ContainerDied","Data":"f06deb4fed7564e0337ec6b1943e495a0044365cf4292454179f6d523272c9c2"} Jan 27 13:36:38 crc kubenswrapper[4786]: I0127 13:36:38.328083 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f06deb4fed7564e0337ec6b1943e495a0044365cf4292454179f6d523272c9c2" Jan 27 13:36:39 crc kubenswrapper[4786]: I0127 13:36:39.602210 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-k6rb7"] Jan 27 13:36:39 crc kubenswrapper[4786]: E0127 13:36:39.602909 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="020d5622-86fc-434d-95af-a38096706001" containerName="mariadb-database-create" Jan 27 13:36:39 crc kubenswrapper[4786]: I0127 13:36:39.602925 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="020d5622-86fc-434d-95af-a38096706001" containerName="mariadb-database-create" Jan 27 13:36:39 crc kubenswrapper[4786]: E0127 13:36:39.602952 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="040cc9fb-7f10-44a2-93f2-8249f45a9a59" containerName="mariadb-database-create" Jan 27 13:36:39 crc kubenswrapper[4786]: I0127 13:36:39.602959 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="040cc9fb-7f10-44a2-93f2-8249f45a9a59" containerName="mariadb-database-create" Jan 27 13:36:39 crc kubenswrapper[4786]: E0127 13:36:39.602972 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d54ede2f-e6a9-495d-ae99-650d3465872f" containerName="mariadb-account-create-update" Jan 27 13:36:39 crc kubenswrapper[4786]: I0127 13:36:39.602980 4786 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d54ede2f-e6a9-495d-ae99-650d3465872f" containerName="mariadb-account-create-update" Jan 27 13:36:39 crc kubenswrapper[4786]: E0127 13:36:39.602990 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="678eb9a6-5e7a-48f7-8843-bd3c3933b133" containerName="mariadb-account-create-update" Jan 27 13:36:39 crc kubenswrapper[4786]: I0127 13:36:39.602998 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="678eb9a6-5e7a-48f7-8843-bd3c3933b133" containerName="mariadb-account-create-update" Jan 27 13:36:39 crc kubenswrapper[4786]: E0127 13:36:39.603008 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="232589d7-b1da-4835-9658-979713413d19" containerName="mariadb-account-create-update" Jan 27 13:36:39 crc kubenswrapper[4786]: I0127 13:36:39.603016 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="232589d7-b1da-4835-9658-979713413d19" containerName="mariadb-account-create-update" Jan 27 13:36:39 crc kubenswrapper[4786]: E0127 13:36:39.603027 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9c32b0d-83ad-46dd-b388-2723bde9de7f" containerName="mariadb-database-create" Jan 27 13:36:39 crc kubenswrapper[4786]: I0127 13:36:39.603034 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9c32b0d-83ad-46dd-b388-2723bde9de7f" containerName="mariadb-database-create" Jan 27 13:36:39 crc kubenswrapper[4786]: I0127 13:36:39.603205 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="040cc9fb-7f10-44a2-93f2-8249f45a9a59" containerName="mariadb-database-create" Jan 27 13:36:39 crc kubenswrapper[4786]: I0127 13:36:39.603218 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="232589d7-b1da-4835-9658-979713413d19" containerName="mariadb-account-create-update" Jan 27 13:36:39 crc kubenswrapper[4786]: I0127 13:36:39.603234 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9c32b0d-83ad-46dd-b388-2723bde9de7f" containerName="mariadb-database-create" Jan 
27 13:36:39 crc kubenswrapper[4786]: I0127 13:36:39.603245 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="678eb9a6-5e7a-48f7-8843-bd3c3933b133" containerName="mariadb-account-create-update" Jan 27 13:36:39 crc kubenswrapper[4786]: I0127 13:36:39.603256 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="d54ede2f-e6a9-495d-ae99-650d3465872f" containerName="mariadb-account-create-update" Jan 27 13:36:39 crc kubenswrapper[4786]: I0127 13:36:39.603270 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="020d5622-86fc-434d-95af-a38096706001" containerName="mariadb-database-create" Jan 27 13:36:39 crc kubenswrapper[4786]: I0127 13:36:39.604430 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-k6rb7" Jan 27 13:36:39 crc kubenswrapper[4786]: I0127 13:36:39.606545 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-nova-kuttl-dockercfg-7v8b9" Jan 27 13:36:39 crc kubenswrapper[4786]: I0127 13:36:39.607196 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-scripts" Jan 27 13:36:39 crc kubenswrapper[4786]: I0127 13:36:39.619168 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 27 13:36:39 crc kubenswrapper[4786]: I0127 13:36:39.620155 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-k6rb7"] Jan 27 13:36:39 crc kubenswrapper[4786]: I0127 13:36:39.764952 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9rgt\" (UniqueName: \"kubernetes.io/projected/8d5d577e-adfd-4726-9fda-627da7dff544-kube-api-access-x9rgt\") pod \"nova-kuttl-cell0-conductor-db-sync-k6rb7\" (UID: \"8d5d577e-adfd-4726-9fda-627da7dff544\") " 
pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-k6rb7" Jan 27 13:36:39 crc kubenswrapper[4786]: I0127 13:36:39.765016 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d5d577e-adfd-4726-9fda-627da7dff544-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-k6rb7\" (UID: \"8d5d577e-adfd-4726-9fda-627da7dff544\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-k6rb7" Jan 27 13:36:39 crc kubenswrapper[4786]: I0127 13:36:39.765045 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d5d577e-adfd-4726-9fda-627da7dff544-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-k6rb7\" (UID: \"8d5d577e-adfd-4726-9fda-627da7dff544\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-k6rb7" Jan 27 13:36:39 crc kubenswrapper[4786]: I0127 13:36:39.866106 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d5d577e-adfd-4726-9fda-627da7dff544-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-k6rb7\" (UID: \"8d5d577e-adfd-4726-9fda-627da7dff544\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-k6rb7" Jan 27 13:36:39 crc kubenswrapper[4786]: I0127 13:36:39.866282 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9rgt\" (UniqueName: \"kubernetes.io/projected/8d5d577e-adfd-4726-9fda-627da7dff544-kube-api-access-x9rgt\") pod \"nova-kuttl-cell0-conductor-db-sync-k6rb7\" (UID: \"8d5d577e-adfd-4726-9fda-627da7dff544\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-k6rb7" Jan 27 13:36:39 crc kubenswrapper[4786]: I0127 13:36:39.866350 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/8d5d577e-adfd-4726-9fda-627da7dff544-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-k6rb7\" (UID: \"8d5d577e-adfd-4726-9fda-627da7dff544\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-k6rb7" Jan 27 13:36:39 crc kubenswrapper[4786]: I0127 13:36:39.870883 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d5d577e-adfd-4726-9fda-627da7dff544-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-k6rb7\" (UID: \"8d5d577e-adfd-4726-9fda-627da7dff544\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-k6rb7" Jan 27 13:36:39 crc kubenswrapper[4786]: I0127 13:36:39.884409 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d5d577e-adfd-4726-9fda-627da7dff544-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-k6rb7\" (UID: \"8d5d577e-adfd-4726-9fda-627da7dff544\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-k6rb7" Jan 27 13:36:39 crc kubenswrapper[4786]: I0127 13:36:39.890591 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9rgt\" (UniqueName: \"kubernetes.io/projected/8d5d577e-adfd-4726-9fda-627da7dff544-kube-api-access-x9rgt\") pod \"nova-kuttl-cell0-conductor-db-sync-k6rb7\" (UID: \"8d5d577e-adfd-4726-9fda-627da7dff544\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-k6rb7" Jan 27 13:36:39 crc kubenswrapper[4786]: I0127 13:36:39.908572 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 27 13:36:39 crc kubenswrapper[4786]: I0127 13:36:39.909675 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 27 13:36:39 crc kubenswrapper[4786]: I0127 13:36:39.911866 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-compute-fake1-compute-config-data" Jan 27 13:36:39 crc kubenswrapper[4786]: I0127 13:36:39.933147 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-8cdhq"] Jan 27 13:36:39 crc kubenswrapper[4786]: I0127 13:36:39.934523 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-8cdhq" Jan 27 13:36:39 crc kubenswrapper[4786]: I0127 13:36:39.967766 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data" Jan 27 13:36:39 crc kubenswrapper[4786]: I0127 13:36:39.967863 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-scripts" Jan 27 13:36:39 crc kubenswrapper[4786]: I0127 13:36:39.968158 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-k6rb7" Jan 27 13:36:39 crc kubenswrapper[4786]: I0127 13:36:39.984329 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 27 13:36:39 crc kubenswrapper[4786]: I0127 13:36:39.997677 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-8cdhq"] Jan 27 13:36:40 crc kubenswrapper[4786]: I0127 13:36:40.066662 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 27 13:36:40 crc kubenswrapper[4786]: I0127 13:36:40.068002 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:36:40 crc kubenswrapper[4786]: I0127 13:36:40.071350 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/109e73cd-4c9d-4841-9e95-950166f5cda0-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-8cdhq\" (UID: \"109e73cd-4c9d-4841-9e95-950166f5cda0\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-8cdhq" Jan 27 13:36:40 crc kubenswrapper[4786]: I0127 13:36:40.071404 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhxj2\" (UniqueName: \"kubernetes.io/projected/109e73cd-4c9d-4841-9e95-950166f5cda0-kube-api-access-bhxj2\") pod \"nova-kuttl-cell1-conductor-db-sync-8cdhq\" (UID: \"109e73cd-4c9d-4841-9e95-950166f5cda0\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-8cdhq" Jan 27 13:36:40 crc kubenswrapper[4786]: I0127 13:36:40.071591 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b59b5bdb-15fd-4a34-bdcc-1b64a3795f10-config-data\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"b59b5bdb-15fd-4a34-bdcc-1b64a3795f10\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 27 13:36:40 crc kubenswrapper[4786]: I0127 13:36:40.071699 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/109e73cd-4c9d-4841-9e95-950166f5cda0-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-8cdhq\" (UID: \"109e73cd-4c9d-4841-9e95-950166f5cda0\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-8cdhq" Jan 27 13:36:40 crc kubenswrapper[4786]: I0127 13:36:40.071731 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-xbmjr\" (UniqueName: \"kubernetes.io/projected/b59b5bdb-15fd-4a34-bdcc-1b64a3795f10-kube-api-access-xbmjr\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"b59b5bdb-15fd-4a34-bdcc-1b64a3795f10\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 27 13:36:40 crc kubenswrapper[4786]: I0127 13:36:40.071781 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-novncproxy-config-data" Jan 27 13:36:40 crc kubenswrapper[4786]: I0127 13:36:40.074529 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 27 13:36:40 crc kubenswrapper[4786]: I0127 13:36:40.176428 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3baae91-6726-4018-ac0c-7036d4227441-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"f3baae91-6726-4018-ac0c-7036d4227441\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:36:40 crc kubenswrapper[4786]: I0127 13:36:40.176527 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlp8m\" (UniqueName: \"kubernetes.io/projected/f3baae91-6726-4018-ac0c-7036d4227441-kube-api-access-mlp8m\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"f3baae91-6726-4018-ac0c-7036d4227441\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:36:40 crc kubenswrapper[4786]: I0127 13:36:40.176574 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/109e73cd-4c9d-4841-9e95-950166f5cda0-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-8cdhq\" (UID: \"109e73cd-4c9d-4841-9e95-950166f5cda0\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-8cdhq" Jan 27 13:36:40 crc kubenswrapper[4786]: I0127 13:36:40.176769 4786 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhxj2\" (UniqueName: \"kubernetes.io/projected/109e73cd-4c9d-4841-9e95-950166f5cda0-kube-api-access-bhxj2\") pod \"nova-kuttl-cell1-conductor-db-sync-8cdhq\" (UID: \"109e73cd-4c9d-4841-9e95-950166f5cda0\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-8cdhq" Jan 27 13:36:40 crc kubenswrapper[4786]: I0127 13:36:40.176850 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b59b5bdb-15fd-4a34-bdcc-1b64a3795f10-config-data\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"b59b5bdb-15fd-4a34-bdcc-1b64a3795f10\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 27 13:36:40 crc kubenswrapper[4786]: I0127 13:36:40.176895 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/109e73cd-4c9d-4841-9e95-950166f5cda0-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-8cdhq\" (UID: \"109e73cd-4c9d-4841-9e95-950166f5cda0\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-8cdhq" Jan 27 13:36:40 crc kubenswrapper[4786]: I0127 13:36:40.176920 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbmjr\" (UniqueName: \"kubernetes.io/projected/b59b5bdb-15fd-4a34-bdcc-1b64a3795f10-kube-api-access-xbmjr\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"b59b5bdb-15fd-4a34-bdcc-1b64a3795f10\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 27 13:36:40 crc kubenswrapper[4786]: I0127 13:36:40.182207 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/109e73cd-4c9d-4841-9e95-950166f5cda0-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-8cdhq\" (UID: \"109e73cd-4c9d-4841-9e95-950166f5cda0\") " 
pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-8cdhq" Jan 27 13:36:40 crc kubenswrapper[4786]: I0127 13:36:40.190336 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b59b5bdb-15fd-4a34-bdcc-1b64a3795f10-config-data\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"b59b5bdb-15fd-4a34-bdcc-1b64a3795f10\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 27 13:36:40 crc kubenswrapper[4786]: I0127 13:36:40.202159 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbmjr\" (UniqueName: \"kubernetes.io/projected/b59b5bdb-15fd-4a34-bdcc-1b64a3795f10-kube-api-access-xbmjr\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"b59b5bdb-15fd-4a34-bdcc-1b64a3795f10\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 27 13:36:40 crc kubenswrapper[4786]: I0127 13:36:40.202835 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/109e73cd-4c9d-4841-9e95-950166f5cda0-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-8cdhq\" (UID: \"109e73cd-4c9d-4841-9e95-950166f5cda0\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-8cdhq" Jan 27 13:36:40 crc kubenswrapper[4786]: I0127 13:36:40.212346 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhxj2\" (UniqueName: \"kubernetes.io/projected/109e73cd-4c9d-4841-9e95-950166f5cda0-kube-api-access-bhxj2\") pod \"nova-kuttl-cell1-conductor-db-sync-8cdhq\" (UID: \"109e73cd-4c9d-4841-9e95-950166f5cda0\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-8cdhq" Jan 27 13:36:40 crc kubenswrapper[4786]: I0127 13:36:40.281117 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3baae91-6726-4018-ac0c-7036d4227441-config-data\") pod 
\"nova-kuttl-cell1-novncproxy-0\" (UID: \"f3baae91-6726-4018-ac0c-7036d4227441\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:36:40 crc kubenswrapper[4786]: I0127 13:36:40.281242 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlp8m\" (UniqueName: \"kubernetes.io/projected/f3baae91-6726-4018-ac0c-7036d4227441-kube-api-access-mlp8m\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"f3baae91-6726-4018-ac0c-7036d4227441\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:36:40 crc kubenswrapper[4786]: I0127 13:36:40.285089 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3baae91-6726-4018-ac0c-7036d4227441-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"f3baae91-6726-4018-ac0c-7036d4227441\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:36:40 crc kubenswrapper[4786]: I0127 13:36:40.297157 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlp8m\" (UniqueName: \"kubernetes.io/projected/f3baae91-6726-4018-ac0c-7036d4227441-kube-api-access-mlp8m\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"f3baae91-6726-4018-ac0c-7036d4227441\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:36:40 crc kubenswrapper[4786]: I0127 13:36:40.358207 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 27 13:36:40 crc kubenswrapper[4786]: I0127 13:36:40.378563 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-8cdhq" Jan 27 13:36:40 crc kubenswrapper[4786]: I0127 13:36:40.387257 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:36:40 crc kubenswrapper[4786]: I0127 13:36:40.494298 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-k6rb7"] Jan 27 13:36:40 crc kubenswrapper[4786]: I0127 13:36:40.835910 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 27 13:36:40 crc kubenswrapper[4786]: W0127 13:36:40.836053 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb59b5bdb_15fd_4a34_bdcc_1b64a3795f10.slice/crio-2a04f7a22406ab1b29bd035bb62180e9d8da88d6e277ff1d653bac3c06e2e213 WatchSource:0}: Error finding container 2a04f7a22406ab1b29bd035bb62180e9d8da88d6e277ff1d653bac3c06e2e213: Status 404 returned error can't find the container with id 2a04f7a22406ab1b29bd035bb62180e9d8da88d6e277ff1d653bac3c06e2e213 Jan 27 13:36:40 crc kubenswrapper[4786]: I0127 13:36:40.838764 4786 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 13:36:40 crc kubenswrapper[4786]: W0127 13:36:40.847816 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3baae91_6726_4018_ac0c_7036d4227441.slice/crio-8fb288ec5bbb53204da3e9b98604a5259e505465e150e02edd047e90525b378e WatchSource:0}: Error finding container 8fb288ec5bbb53204da3e9b98604a5259e505465e150e02edd047e90525b378e: Status 404 returned error can't find the container with id 8fb288ec5bbb53204da3e9b98604a5259e505465e150e02edd047e90525b378e Jan 27 13:36:40 crc kubenswrapper[4786]: I0127 13:36:40.848232 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 27 13:36:40 crc kubenswrapper[4786]: W0127 13:36:40.866381 4786 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod109e73cd_4c9d_4841_9e95_950166f5cda0.slice/crio-d6416d1cd89f3fbababc432ca492fa03f1e3b4307c444aa73ef43a569669d625 WatchSource:0}: Error finding container d6416d1cd89f3fbababc432ca492fa03f1e3b4307c444aa73ef43a569669d625: Status 404 returned error can't find the container with id d6416d1cd89f3fbababc432ca492fa03f1e3b4307c444aa73ef43a569669d625 Jan 27 13:36:40 crc kubenswrapper[4786]: I0127 13:36:40.880326 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-8cdhq"] Jan 27 13:36:41 crc kubenswrapper[4786]: I0127 13:36:41.350211 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"f3baae91-6726-4018-ac0c-7036d4227441","Type":"ContainerStarted","Data":"8fb288ec5bbb53204da3e9b98604a5259e505465e150e02edd047e90525b378e"} Jan 27 13:36:41 crc kubenswrapper[4786]: I0127 13:36:41.351755 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-k6rb7" event={"ID":"8d5d577e-adfd-4726-9fda-627da7dff544","Type":"ContainerStarted","Data":"fa7db149003041de3e3a23fdc5facff65add4a718a6790f91e98e7d130311545"} Jan 27 13:36:41 crc kubenswrapper[4786]: I0127 13:36:41.353364 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-8cdhq" event={"ID":"109e73cd-4c9d-4841-9e95-950166f5cda0","Type":"ContainerStarted","Data":"d6416d1cd89f3fbababc432ca492fa03f1e3b4307c444aa73ef43a569669d625"} Jan 27 13:36:41 crc kubenswrapper[4786]: I0127 13:36:41.354650 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"b59b5bdb-15fd-4a34-bdcc-1b64a3795f10","Type":"ContainerStarted","Data":"2a04f7a22406ab1b29bd035bb62180e9d8da88d6e277ff1d653bac3c06e2e213"} Jan 27 13:36:42 crc kubenswrapper[4786]: I0127 13:36:42.052845 4786 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/placement-1281-account-create-update-kmk5c"] Jan 27 13:36:42 crc kubenswrapper[4786]: I0127 13:36:42.062645 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/placement-db-create-xhxd7"] Jan 27 13:36:42 crc kubenswrapper[4786]: I0127 13:36:42.071247 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-db-create-grv59"] Jan 27 13:36:42 crc kubenswrapper[4786]: I0127 13:36:42.079428 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-390c-account-create-update-74l8w"] Jan 27 13:36:42 crc kubenswrapper[4786]: I0127 13:36:42.087415 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/placement-db-create-xhxd7"] Jan 27 13:36:42 crc kubenswrapper[4786]: I0127 13:36:42.095042 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/placement-1281-account-create-update-kmk5c"] Jan 27 13:36:42 crc kubenswrapper[4786]: I0127 13:36:42.102195 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-390c-account-create-update-74l8w"] Jan 27 13:36:42 crc kubenswrapper[4786]: I0127 13:36:42.107469 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-db-create-grv59"] Jan 27 13:36:43 crc kubenswrapper[4786]: I0127 13:36:43.465457 4786 scope.go:117] "RemoveContainer" containerID="2f94fef961bba31453718ee505c65aba4d5e2d46c581dccdf2daaebbe3a39906" Jan 27 13:36:43 crc kubenswrapper[4786]: E0127 13:36:43.465818 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" 
podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:36:43 crc kubenswrapper[4786]: I0127 13:36:43.473472 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e22f5-8034-4f38-a93d-077d430c969e" path="/var/lib/kubelet/pods/301e22f5-8034-4f38-a93d-077d430c969e/volumes" Jan 27 13:36:43 crc kubenswrapper[4786]: I0127 13:36:43.474193 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63d091d7-88cf-41d7-8ae2-efc780052648" path="/var/lib/kubelet/pods/63d091d7-88cf-41d7-8ae2-efc780052648/volumes" Jan 27 13:36:43 crc kubenswrapper[4786]: I0127 13:36:43.474688 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92ed6245-6eb9-4e9c-aca7-3b0d9d205639" path="/var/lib/kubelet/pods/92ed6245-6eb9-4e9c-aca7-3b0d9d205639/volumes" Jan 27 13:36:43 crc kubenswrapper[4786]: I0127 13:36:43.475153 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad879844-692e-4c55-8557-499c93a029c2" path="/var/lib/kubelet/pods/ad879844-692e-4c55-8557-499c93a029c2/volumes" Jan 27 13:36:47 crc kubenswrapper[4786]: I0127 13:36:47.415023 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-8cdhq" event={"ID":"109e73cd-4c9d-4841-9e95-950166f5cda0","Type":"ContainerStarted","Data":"5d40b8ee11a709c6f836bdee33d7d12b9c786b349ca313b826c8d031140d94b7"} Jan 27 13:36:47 crc kubenswrapper[4786]: I0127 13:36:47.419263 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"f3baae91-6726-4018-ac0c-7036d4227441","Type":"ContainerStarted","Data":"e2d577b04a27cc87c7e9af3607c2886f80bd62473062d2658c9495af0b9702a8"} Jan 27 13:36:47 crc kubenswrapper[4786]: I0127 13:36:47.421477 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-k6rb7" 
event={"ID":"8d5d577e-adfd-4726-9fda-627da7dff544","Type":"ContainerStarted","Data":"419e5894d5218d7fbfb6f86d65d3618a76b709e1a328f8cdfe1b1406bd74712d"} Jan 27 13:36:47 crc kubenswrapper[4786]: I0127 13:36:47.441014 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-8cdhq" podStartSLOduration=8.440999009 podStartE2EDuration="8.440999009s" podCreationTimestamp="2026-01-27 13:36:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:36:47.43554091 +0000 UTC m=+1790.646155029" watchObservedRunningTime="2026-01-27 13:36:47.440999009 +0000 UTC m=+1790.651613128" Jan 27 13:36:47 crc kubenswrapper[4786]: I0127 13:36:47.452036 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podStartSLOduration=7.45201764 podStartE2EDuration="7.45201764s" podCreationTimestamp="2026-01-27 13:36:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:36:47.447056524 +0000 UTC m=+1790.657670643" watchObservedRunningTime="2026-01-27 13:36:47.45201764 +0000 UTC m=+1790.662631759" Jan 27 13:36:47 crc kubenswrapper[4786]: I0127 13:36:47.467018 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-k6rb7" podStartSLOduration=8.467002150999999 podStartE2EDuration="8.467002151s" podCreationTimestamp="2026-01-27 13:36:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:36:47.464975745 +0000 UTC m=+1790.675589864" watchObservedRunningTime="2026-01-27 13:36:47.467002151 +0000 UTC m=+1790.677616270" Jan 27 13:36:50 crc kubenswrapper[4786]: I0127 13:36:50.387824 4786 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:36:50 crc kubenswrapper[4786]: I0127 13:36:50.389246 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:36:50 crc kubenswrapper[4786]: I0127 13:36:50.402030 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:36:50 crc kubenswrapper[4786]: I0127 13:36:50.465967 4786 generic.go:334] "Generic (PLEG): container finished" podID="109e73cd-4c9d-4841-9e95-950166f5cda0" containerID="5d40b8ee11a709c6f836bdee33d7d12b9c786b349ca313b826c8d031140d94b7" exitCode=0 Jan 27 13:36:50 crc kubenswrapper[4786]: I0127 13:36:50.466066 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-8cdhq" event={"ID":"109e73cd-4c9d-4841-9e95-950166f5cda0","Type":"ContainerDied","Data":"5d40b8ee11a709c6f836bdee33d7d12b9c786b349ca313b826c8d031140d94b7"} Jan 27 13:36:50 crc kubenswrapper[4786]: I0127 13:36:50.477397 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:36:52 crc kubenswrapper[4786]: I0127 13:36:52.038378 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/root-account-create-update-7wz57"] Jan 27 13:36:52 crc kubenswrapper[4786]: I0127 13:36:52.046439 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/root-account-create-update-7wz57"] Jan 27 13:36:52 crc kubenswrapper[4786]: I0127 13:36:52.495892 4786 generic.go:334] "Generic (PLEG): container finished" podID="8d5d577e-adfd-4726-9fda-627da7dff544" containerID="419e5894d5218d7fbfb6f86d65d3618a76b709e1a328f8cdfe1b1406bd74712d" exitCode=0 Jan 27 13:36:52 crc kubenswrapper[4786]: I0127 13:36:52.496906 4786 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-k6rb7" event={"ID":"8d5d577e-adfd-4726-9fda-627da7dff544","Type":"ContainerDied","Data":"419e5894d5218d7fbfb6f86d65d3618a76b709e1a328f8cdfe1b1406bd74712d"} Jan 27 13:36:53 crc kubenswrapper[4786]: I0127 13:36:53.474977 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98bca412-7a6c-4bd1-b4d4-2665efe925e4" path="/var/lib/kubelet/pods/98bca412-7a6c-4bd1-b4d4-2665efe925e4/volumes" Jan 27 13:36:54 crc kubenswrapper[4786]: I0127 13:36:54.465871 4786 scope.go:117] "RemoveContainer" containerID="2f94fef961bba31453718ee505c65aba4d5e2d46c581dccdf2daaebbe3a39906" Jan 27 13:36:54 crc kubenswrapper[4786]: E0127 13:36:54.466235 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:36:55 crc kubenswrapper[4786]: I0127 13:36:55.755454 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-8cdhq" Jan 27 13:36:55 crc kubenswrapper[4786]: I0127 13:36:55.764413 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-k6rb7" Jan 27 13:36:55 crc kubenswrapper[4786]: I0127 13:36:55.775547 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/109e73cd-4c9d-4841-9e95-950166f5cda0-scripts\") pod \"109e73cd-4c9d-4841-9e95-950166f5cda0\" (UID: \"109e73cd-4c9d-4841-9e95-950166f5cda0\") " Jan 27 13:36:55 crc kubenswrapper[4786]: I0127 13:36:55.775890 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhxj2\" (UniqueName: \"kubernetes.io/projected/109e73cd-4c9d-4841-9e95-950166f5cda0-kube-api-access-bhxj2\") pod \"109e73cd-4c9d-4841-9e95-950166f5cda0\" (UID: \"109e73cd-4c9d-4841-9e95-950166f5cda0\") " Jan 27 13:36:55 crc kubenswrapper[4786]: I0127 13:36:55.776000 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/109e73cd-4c9d-4841-9e95-950166f5cda0-config-data\") pod \"109e73cd-4c9d-4841-9e95-950166f5cda0\" (UID: \"109e73cd-4c9d-4841-9e95-950166f5cda0\") " Jan 27 13:36:55 crc kubenswrapper[4786]: I0127 13:36:55.776119 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d5d577e-adfd-4726-9fda-627da7dff544-config-data\") pod \"8d5d577e-adfd-4726-9fda-627da7dff544\" (UID: \"8d5d577e-adfd-4726-9fda-627da7dff544\") " Jan 27 13:36:55 crc kubenswrapper[4786]: I0127 13:36:55.776241 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9rgt\" (UniqueName: \"kubernetes.io/projected/8d5d577e-adfd-4726-9fda-627da7dff544-kube-api-access-x9rgt\") pod \"8d5d577e-adfd-4726-9fda-627da7dff544\" (UID: \"8d5d577e-adfd-4726-9fda-627da7dff544\") " Jan 27 13:36:55 crc kubenswrapper[4786]: I0127 13:36:55.776780 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/8d5d577e-adfd-4726-9fda-627da7dff544-scripts\") pod \"8d5d577e-adfd-4726-9fda-627da7dff544\" (UID: \"8d5d577e-adfd-4726-9fda-627da7dff544\") " Jan 27 13:36:55 crc kubenswrapper[4786]: I0127 13:36:55.784755 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d5d577e-adfd-4726-9fda-627da7dff544-scripts" (OuterVolumeSpecName: "scripts") pod "8d5d577e-adfd-4726-9fda-627da7dff544" (UID: "8d5d577e-adfd-4726-9fda-627da7dff544"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:36:55 crc kubenswrapper[4786]: I0127 13:36:55.785559 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/109e73cd-4c9d-4841-9e95-950166f5cda0-kube-api-access-bhxj2" (OuterVolumeSpecName: "kube-api-access-bhxj2") pod "109e73cd-4c9d-4841-9e95-950166f5cda0" (UID: "109e73cd-4c9d-4841-9e95-950166f5cda0"). InnerVolumeSpecName "kube-api-access-bhxj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:36:55 crc kubenswrapper[4786]: I0127 13:36:55.790757 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d5d577e-adfd-4726-9fda-627da7dff544-kube-api-access-x9rgt" (OuterVolumeSpecName: "kube-api-access-x9rgt") pod "8d5d577e-adfd-4726-9fda-627da7dff544" (UID: "8d5d577e-adfd-4726-9fda-627da7dff544"). InnerVolumeSpecName "kube-api-access-x9rgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:36:55 crc kubenswrapper[4786]: I0127 13:36:55.807990 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/109e73cd-4c9d-4841-9e95-950166f5cda0-scripts" (OuterVolumeSpecName: "scripts") pod "109e73cd-4c9d-4841-9e95-950166f5cda0" (UID: "109e73cd-4c9d-4841-9e95-950166f5cda0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:36:55 crc kubenswrapper[4786]: I0127 13:36:55.813837 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/109e73cd-4c9d-4841-9e95-950166f5cda0-config-data" (OuterVolumeSpecName: "config-data") pod "109e73cd-4c9d-4841-9e95-950166f5cda0" (UID: "109e73cd-4c9d-4841-9e95-950166f5cda0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:36:55 crc kubenswrapper[4786]: I0127 13:36:55.834272 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d5d577e-adfd-4726-9fda-627da7dff544-config-data" (OuterVolumeSpecName: "config-data") pod "8d5d577e-adfd-4726-9fda-627da7dff544" (UID: "8d5d577e-adfd-4726-9fda-627da7dff544"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:36:55 crc kubenswrapper[4786]: I0127 13:36:55.878152 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/109e73cd-4c9d-4841-9e95-950166f5cda0-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:55 crc kubenswrapper[4786]: I0127 13:36:55.878195 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d5d577e-adfd-4726-9fda-627da7dff544-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:55 crc kubenswrapper[4786]: I0127 13:36:55.878204 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9rgt\" (UniqueName: \"kubernetes.io/projected/8d5d577e-adfd-4726-9fda-627da7dff544-kube-api-access-x9rgt\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:55 crc kubenswrapper[4786]: I0127 13:36:55.878213 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d5d577e-adfd-4726-9fda-627da7dff544-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:55 crc kubenswrapper[4786]: 
I0127 13:36:55.878222 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/109e73cd-4c9d-4841-9e95-950166f5cda0-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:55 crc kubenswrapper[4786]: I0127 13:36:55.878230 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhxj2\" (UniqueName: \"kubernetes.io/projected/109e73cd-4c9d-4841-9e95-950166f5cda0-kube-api-access-bhxj2\") on node \"crc\" DevicePath \"\"" Jan 27 13:36:56 crc kubenswrapper[4786]: I0127 13:36:56.532013 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-k6rb7" event={"ID":"8d5d577e-adfd-4726-9fda-627da7dff544","Type":"ContainerDied","Data":"fa7db149003041de3e3a23fdc5facff65add4a718a6790f91e98e7d130311545"} Jan 27 13:36:56 crc kubenswrapper[4786]: I0127 13:36:56.532475 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa7db149003041de3e3a23fdc5facff65add4a718a6790f91e98e7d130311545" Jan 27 13:36:56 crc kubenswrapper[4786]: I0127 13:36:56.532107 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-k6rb7" Jan 27 13:36:56 crc kubenswrapper[4786]: I0127 13:36:56.535590 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-8cdhq" event={"ID":"109e73cd-4c9d-4841-9e95-950166f5cda0","Type":"ContainerDied","Data":"d6416d1cd89f3fbababc432ca492fa03f1e3b4307c444aa73ef43a569669d625"} Jan 27 13:36:56 crc kubenswrapper[4786]: I0127 13:36:56.535833 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6416d1cd89f3fbababc432ca492fa03f1e3b4307c444aa73ef43a569669d625" Jan 27 13:36:56 crc kubenswrapper[4786]: I0127 13:36:56.535807 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-8cdhq" Jan 27 13:36:56 crc kubenswrapper[4786]: I0127 13:36:56.544054 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"b59b5bdb-15fd-4a34-bdcc-1b64a3795f10","Type":"ContainerStarted","Data":"e92b6484e78557635a0a5c94a05ddbff2f4cfa37e72743a5308f37ddb9231330"} Jan 27 13:36:56 crc kubenswrapper[4786]: I0127 13:36:56.545729 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 27 13:36:56 crc kubenswrapper[4786]: I0127 13:36:56.575397 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 27 13:36:56 crc kubenswrapper[4786]: I0127 13:36:56.598191 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podStartSLOduration=2.375599937 podStartE2EDuration="17.598171834s" podCreationTimestamp="2026-01-27 13:36:39 +0000 UTC" firstStartedPulling="2026-01-27 13:36:40.838446491 +0000 UTC m=+1784.049060610" lastFinishedPulling="2026-01-27 13:36:56.061018388 +0000 UTC m=+1799.271632507" observedRunningTime="2026-01-27 13:36:56.563190838 +0000 UTC m=+1799.773804977" watchObservedRunningTime="2026-01-27 13:36:56.598171834 +0000 UTC m=+1799.808785953" Jan 27 13:36:56 crc kubenswrapper[4786]: I0127 13:36:56.846216 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 27 13:36:56 crc kubenswrapper[4786]: E0127 13:36:56.847443 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d5d577e-adfd-4726-9fda-627da7dff544" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 27 13:36:56 crc kubenswrapper[4786]: I0127 13:36:56.847531 4786 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8d5d577e-adfd-4726-9fda-627da7dff544" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 27 13:36:56 crc kubenswrapper[4786]: E0127 13:36:56.847628 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="109e73cd-4c9d-4841-9e95-950166f5cda0" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 27 13:36:56 crc kubenswrapper[4786]: I0127 13:36:56.847797 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="109e73cd-4c9d-4841-9e95-950166f5cda0" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 27 13:36:56 crc kubenswrapper[4786]: I0127 13:36:56.848077 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="109e73cd-4c9d-4841-9e95-950166f5cda0" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 27 13:36:56 crc kubenswrapper[4786]: I0127 13:36:56.848164 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d5d577e-adfd-4726-9fda-627da7dff544" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 27 13:36:56 crc kubenswrapper[4786]: I0127 13:36:56.848859 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:36:56 crc kubenswrapper[4786]: I0127 13:36:56.856617 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data" Jan 27 13:36:56 crc kubenswrapper[4786]: I0127 13:36:56.866915 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 27 13:36:56 crc kubenswrapper[4786]: I0127 13:36:56.906598 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 27 13:36:56 crc kubenswrapper[4786]: I0127 13:36:56.907969 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:36:56 crc kubenswrapper[4786]: I0127 13:36:56.910713 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 27 13:36:56 crc kubenswrapper[4786]: I0127 13:36:56.915482 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 27 13:36:57 crc kubenswrapper[4786]: I0127 13:36:57.004706 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b02f7325-70bc-4165-b5a4-1b5d75bd397e-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"b02f7325-70bc-4165-b5a4-1b5d75bd397e\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:36:57 crc kubenswrapper[4786]: I0127 13:36:57.004799 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade06765-40a5-4b74-a4fb-89726ae6c9d8-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"ade06765-40a5-4b74-a4fb-89726ae6c9d8\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:36:57 crc kubenswrapper[4786]: I0127 13:36:57.004836 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztw2q\" (UniqueName: \"kubernetes.io/projected/b02f7325-70bc-4165-b5a4-1b5d75bd397e-kube-api-access-ztw2q\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"b02f7325-70bc-4165-b5a4-1b5d75bd397e\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:36:57 crc kubenswrapper[4786]: I0127 13:36:57.004872 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg69h\" (UniqueName: \"kubernetes.io/projected/ade06765-40a5-4b74-a4fb-89726ae6c9d8-kube-api-access-kg69h\") pod 
\"nova-kuttl-cell0-conductor-0\" (UID: \"ade06765-40a5-4b74-a4fb-89726ae6c9d8\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:36:57 crc kubenswrapper[4786]: I0127 13:36:57.106030 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b02f7325-70bc-4165-b5a4-1b5d75bd397e-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"b02f7325-70bc-4165-b5a4-1b5d75bd397e\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:36:57 crc kubenswrapper[4786]: I0127 13:36:57.106358 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade06765-40a5-4b74-a4fb-89726ae6c9d8-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"ade06765-40a5-4b74-a4fb-89726ae6c9d8\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:36:57 crc kubenswrapper[4786]: I0127 13:36:57.106468 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztw2q\" (UniqueName: \"kubernetes.io/projected/b02f7325-70bc-4165-b5a4-1b5d75bd397e-kube-api-access-ztw2q\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"b02f7325-70bc-4165-b5a4-1b5d75bd397e\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:36:57 crc kubenswrapper[4786]: I0127 13:36:57.106576 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg69h\" (UniqueName: \"kubernetes.io/projected/ade06765-40a5-4b74-a4fb-89726ae6c9d8-kube-api-access-kg69h\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"ade06765-40a5-4b74-a4fb-89726ae6c9d8\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:36:57 crc kubenswrapper[4786]: I0127 13:36:57.112156 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade06765-40a5-4b74-a4fb-89726ae6c9d8-config-data\") pod 
\"nova-kuttl-cell0-conductor-0\" (UID: \"ade06765-40a5-4b74-a4fb-89726ae6c9d8\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:36:57 crc kubenswrapper[4786]: I0127 13:36:57.124067 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b02f7325-70bc-4165-b5a4-1b5d75bd397e-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"b02f7325-70bc-4165-b5a4-1b5d75bd397e\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:36:57 crc kubenswrapper[4786]: I0127 13:36:57.127354 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztw2q\" (UniqueName: \"kubernetes.io/projected/b02f7325-70bc-4165-b5a4-1b5d75bd397e-kube-api-access-ztw2q\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"b02f7325-70bc-4165-b5a4-1b5d75bd397e\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:36:57 crc kubenswrapper[4786]: I0127 13:36:57.143215 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg69h\" (UniqueName: \"kubernetes.io/projected/ade06765-40a5-4b74-a4fb-89726ae6c9d8-kube-api-access-kg69h\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"ade06765-40a5-4b74-a4fb-89726ae6c9d8\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:36:57 crc kubenswrapper[4786]: I0127 13:36:57.169571 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:36:57 crc kubenswrapper[4786]: I0127 13:36:57.226875 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:36:57 crc kubenswrapper[4786]: I0127 13:36:57.590422 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 27 13:36:57 crc kubenswrapper[4786]: W0127 13:36:57.593122 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb02f7325_70bc_4165_b5a4_1b5d75bd397e.slice/crio-a064cca09c40a60313a91a8e9e771d247c9424f3be013bc91784e302668cbe4c WatchSource:0}: Error finding container a064cca09c40a60313a91a8e9e771d247c9424f3be013bc91784e302668cbe4c: Status 404 returned error can't find the container with id a064cca09c40a60313a91a8e9e771d247c9424f3be013bc91784e302668cbe4c Jan 27 13:36:57 crc kubenswrapper[4786]: I0127 13:36:57.732517 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 27 13:36:57 crc kubenswrapper[4786]: W0127 13:36:57.739654 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podade06765_40a5_4b74_a4fb_89726ae6c9d8.slice/crio-e84bb6ad2c5c015b349e6acfb82446c1b73d483ec44e4f5f3060815d06c1874e WatchSource:0}: Error finding container e84bb6ad2c5c015b349e6acfb82446c1b73d483ec44e4f5f3060815d06c1874e: Status 404 returned error can't find the container with id e84bb6ad2c5c015b349e6acfb82446c1b73d483ec44e4f5f3060815d06c1874e Jan 27 13:36:58 crc kubenswrapper[4786]: I0127 13:36:58.574479 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"ade06765-40a5-4b74-a4fb-89726ae6c9d8","Type":"ContainerStarted","Data":"e84bb6ad2c5c015b349e6acfb82446c1b73d483ec44e4f5f3060815d06c1874e"} Jan 27 13:36:58 crc kubenswrapper[4786]: I0127 13:36:58.577125 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"b02f7325-70bc-4165-b5a4-1b5d75bd397e","Type":"ContainerStarted","Data":"a064cca09c40a60313a91a8e9e771d247c9424f3be013bc91784e302668cbe4c"} Jan 27 13:36:58 crc kubenswrapper[4786]: I0127 13:36:58.905546 4786 scope.go:117] "RemoveContainer" containerID="4603d55f7714e45336cb9d5909533599f47684e611d86a0704f656c68b418b35" Jan 27 13:36:58 crc kubenswrapper[4786]: I0127 13:36:58.949595 4786 scope.go:117] "RemoveContainer" containerID="ff55188d60a092bcd324496b453bf4b7dd890b3114ee21c84c242a9731df73ab" Jan 27 13:36:58 crc kubenswrapper[4786]: I0127 13:36:58.983839 4786 scope.go:117] "RemoveContainer" containerID="07d37153204b0a782613da8a8e923a9a2c87f2bd5c733ca15208b93851f5145c" Jan 27 13:36:59 crc kubenswrapper[4786]: I0127 13:36:59.002953 4786 scope.go:117] "RemoveContainer" containerID="e14006dbe8e2adfabe4164de8a7d51b722809ddb5dc2e850803133f07afadeee" Jan 27 13:36:59 crc kubenswrapper[4786]: I0127 13:36:59.038488 4786 scope.go:117] "RemoveContainer" containerID="9b438c7363fe3efde2c4c826c373f3bc6a9166501adbb9ac016ee740e66f09c8" Jan 27 13:36:59 crc kubenswrapper[4786]: I0127 13:36:59.063668 4786 scope.go:117] "RemoveContainer" containerID="3cb2becea2a689c4096fc0ffe5eb38d7074024a664c3fca7bbb052a1cfd72a4d" Jan 27 13:36:59 crc kubenswrapper[4786]: I0127 13:36:59.110689 4786 scope.go:117] "RemoveContainer" containerID="008416ea6a7b9aeb70f9e6f50e083b0dabb18aaccfd570c460f0464be175c06f" Jan 27 13:36:59 crc kubenswrapper[4786]: I0127 13:36:59.146710 4786 scope.go:117] "RemoveContainer" containerID="21172873558a9e8c39b8255e7ac10559b9658bc6c3d25e87772f7b614d76b9d9" Jan 27 13:36:59 crc kubenswrapper[4786]: I0127 13:36:59.586202 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"b02f7325-70bc-4165-b5a4-1b5d75bd397e","Type":"ContainerStarted","Data":"7bf87927d952736571c7a217ec69caf0056e64bc00ae532d6f71262bc5401133"} Jan 27 13:36:59 crc 
kubenswrapper[4786]: I0127 13:36:59.586312 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:36:59 crc kubenswrapper[4786]: I0127 13:36:59.588560 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"ade06765-40a5-4b74-a4fb-89726ae6c9d8","Type":"ContainerStarted","Data":"a25ca61f1a71b745764506d5e7439363114124b732141b028488fbb2ea11b7f1"} Jan 27 13:36:59 crc kubenswrapper[4786]: I0127 13:36:59.588802 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:36:59 crc kubenswrapper[4786]: I0127 13:36:59.611003 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podStartSLOduration=3.610984631 podStartE2EDuration="3.610984631s" podCreationTimestamp="2026-01-27 13:36:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:36:59.604285697 +0000 UTC m=+1802.814899836" watchObservedRunningTime="2026-01-27 13:36:59.610984631 +0000 UTC m=+1802.821598750" Jan 27 13:36:59 crc kubenswrapper[4786]: I0127 13:36:59.628645 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podStartSLOduration=3.628625753 podStartE2EDuration="3.628625753s" podCreationTimestamp="2026-01-27 13:36:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:36:59.623847392 +0000 UTC m=+1802.834461521" watchObservedRunningTime="2026-01-27 13:36:59.628625753 +0000 UTC m=+1802.839239882" Jan 27 13:37:07 crc kubenswrapper[4786]: I0127 13:37:07.192161 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:37:07 crc kubenswrapper[4786]: I0127 13:37:07.253799 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:37:07 crc kubenswrapper[4786]: I0127 13:37:07.471456 4786 scope.go:117] "RemoveContainer" containerID="2f94fef961bba31453718ee505c65aba4d5e2d46c581dccdf2daaebbe3a39906" Jan 27 13:37:07 crc kubenswrapper[4786]: E0127 13:37:07.471917 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:37:07 crc kubenswrapper[4786]: I0127 13:37:07.636021 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-5smpw"] Jan 27 13:37:07 crc kubenswrapper[4786]: I0127 13:37:07.637306 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-5smpw" Jan 27 13:37:07 crc kubenswrapper[4786]: I0127 13:37:07.650489 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-config-data" Jan 27 13:37:07 crc kubenswrapper[4786]: I0127 13:37:07.650747 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-scripts" Jan 27 13:37:07 crc kubenswrapper[4786]: I0127 13:37:07.653731 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-5smpw"] Jan 27 13:37:07 crc kubenswrapper[4786]: I0127 13:37:07.684826 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-host-discover-695l2"] Jan 27 13:37:07 crc kubenswrapper[4786]: I0127 13:37:07.686544 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-695l2" Jan 27 13:37:07 crc kubenswrapper[4786]: I0127 13:37:07.709147 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-host-discover-695l2"] Jan 27 13:37:07 crc kubenswrapper[4786]: I0127 13:37:07.782725 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3fad1ba-93ff-4712-bfaa-17cb808e87f4-config-data\") pod \"nova-kuttl-cell1-cell-mapping-5smpw\" (UID: \"a3fad1ba-93ff-4712-bfaa-17cb808e87f4\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-5smpw" Jan 27 13:37:07 crc kubenswrapper[4786]: I0127 13:37:07.782785 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5f44e04-2290-4929-ac72-51dd6213663d-scripts\") pod \"nova-kuttl-cell1-host-discover-695l2\" (UID: \"c5f44e04-2290-4929-ac72-51dd6213663d\") " 
pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-695l2" Jan 27 13:37:07 crc kubenswrapper[4786]: I0127 13:37:07.782830 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5f44e04-2290-4929-ac72-51dd6213663d-config-data\") pod \"nova-kuttl-cell1-host-discover-695l2\" (UID: \"c5f44e04-2290-4929-ac72-51dd6213663d\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-695l2" Jan 27 13:37:07 crc kubenswrapper[4786]: I0127 13:37:07.782966 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3fad1ba-93ff-4712-bfaa-17cb808e87f4-scripts\") pod \"nova-kuttl-cell1-cell-mapping-5smpw\" (UID: \"a3fad1ba-93ff-4712-bfaa-17cb808e87f4\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-5smpw" Jan 27 13:37:07 crc kubenswrapper[4786]: I0127 13:37:07.782988 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvsgr\" (UniqueName: \"kubernetes.io/projected/a3fad1ba-93ff-4712-bfaa-17cb808e87f4-kube-api-access-vvsgr\") pod \"nova-kuttl-cell1-cell-mapping-5smpw\" (UID: \"a3fad1ba-93ff-4712-bfaa-17cb808e87f4\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-5smpw" Jan 27 13:37:07 crc kubenswrapper[4786]: I0127 13:37:07.783038 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txxns\" (UniqueName: \"kubernetes.io/projected/c5f44e04-2290-4929-ac72-51dd6213663d-kube-api-access-txxns\") pod \"nova-kuttl-cell1-host-discover-695l2\" (UID: \"c5f44e04-2290-4929-ac72-51dd6213663d\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-695l2" Jan 27 13:37:07 crc kubenswrapper[4786]: I0127 13:37:07.884275 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a3fad1ba-93ff-4712-bfaa-17cb808e87f4-config-data\") pod \"nova-kuttl-cell1-cell-mapping-5smpw\" (UID: \"a3fad1ba-93ff-4712-bfaa-17cb808e87f4\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-5smpw" Jan 27 13:37:07 crc kubenswrapper[4786]: I0127 13:37:07.884328 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5f44e04-2290-4929-ac72-51dd6213663d-scripts\") pod \"nova-kuttl-cell1-host-discover-695l2\" (UID: \"c5f44e04-2290-4929-ac72-51dd6213663d\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-695l2" Jan 27 13:37:07 crc kubenswrapper[4786]: I0127 13:37:07.884377 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5f44e04-2290-4929-ac72-51dd6213663d-config-data\") pod \"nova-kuttl-cell1-host-discover-695l2\" (UID: \"c5f44e04-2290-4929-ac72-51dd6213663d\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-695l2" Jan 27 13:37:07 crc kubenswrapper[4786]: I0127 13:37:07.884525 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3fad1ba-93ff-4712-bfaa-17cb808e87f4-scripts\") pod \"nova-kuttl-cell1-cell-mapping-5smpw\" (UID: \"a3fad1ba-93ff-4712-bfaa-17cb808e87f4\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-5smpw" Jan 27 13:37:07 crc kubenswrapper[4786]: I0127 13:37:07.884551 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvsgr\" (UniqueName: \"kubernetes.io/projected/a3fad1ba-93ff-4712-bfaa-17cb808e87f4-kube-api-access-vvsgr\") pod \"nova-kuttl-cell1-cell-mapping-5smpw\" (UID: \"a3fad1ba-93ff-4712-bfaa-17cb808e87f4\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-5smpw" Jan 27 13:37:07 crc kubenswrapper[4786]: I0127 13:37:07.884593 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-txxns\" (UniqueName: \"kubernetes.io/projected/c5f44e04-2290-4929-ac72-51dd6213663d-kube-api-access-txxns\") pod \"nova-kuttl-cell1-host-discover-695l2\" (UID: \"c5f44e04-2290-4929-ac72-51dd6213663d\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-695l2" Jan 27 13:37:07 crc kubenswrapper[4786]: I0127 13:37:07.890525 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3fad1ba-93ff-4712-bfaa-17cb808e87f4-config-data\") pod \"nova-kuttl-cell1-cell-mapping-5smpw\" (UID: \"a3fad1ba-93ff-4712-bfaa-17cb808e87f4\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-5smpw" Jan 27 13:37:07 crc kubenswrapper[4786]: I0127 13:37:07.890747 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5f44e04-2290-4929-ac72-51dd6213663d-scripts\") pod \"nova-kuttl-cell1-host-discover-695l2\" (UID: \"c5f44e04-2290-4929-ac72-51dd6213663d\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-695l2" Jan 27 13:37:07 crc kubenswrapper[4786]: I0127 13:37:07.898224 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5f44e04-2290-4929-ac72-51dd6213663d-config-data\") pod \"nova-kuttl-cell1-host-discover-695l2\" (UID: \"c5f44e04-2290-4929-ac72-51dd6213663d\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-695l2" Jan 27 13:37:07 crc kubenswrapper[4786]: I0127 13:37:07.905630 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3fad1ba-93ff-4712-bfaa-17cb808e87f4-scripts\") pod \"nova-kuttl-cell1-cell-mapping-5smpw\" (UID: \"a3fad1ba-93ff-4712-bfaa-17cb808e87f4\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-5smpw" Jan 27 13:37:07 crc kubenswrapper[4786]: I0127 13:37:07.905687 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-txxns\" (UniqueName: \"kubernetes.io/projected/c5f44e04-2290-4929-ac72-51dd6213663d-kube-api-access-txxns\") pod \"nova-kuttl-cell1-host-discover-695l2\" (UID: \"c5f44e04-2290-4929-ac72-51dd6213663d\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-695l2" Jan 27 13:37:07 crc kubenswrapper[4786]: I0127 13:37:07.908319 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvsgr\" (UniqueName: \"kubernetes.io/projected/a3fad1ba-93ff-4712-bfaa-17cb808e87f4-kube-api-access-vvsgr\") pod \"nova-kuttl-cell1-cell-mapping-5smpw\" (UID: \"a3fad1ba-93ff-4712-bfaa-17cb808e87f4\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-5smpw" Jan 27 13:37:07 crc kubenswrapper[4786]: I0127 13:37:07.976716 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-5smpw" Jan 27 13:37:07 crc kubenswrapper[4786]: I0127 13:37:07.997336 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-d82l7"] Jan 27 13:37:07 crc kubenswrapper[4786]: I0127 13:37:07.998590 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-d82l7" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.003373 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-695l2" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.004672 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-scripts" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.004884 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-config-data" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.030748 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-d82l7"] Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.088950 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/582cddc3-f46b-47c9-8aad-1b38afaf5cc0-config-data\") pod \"nova-kuttl-cell0-cell-mapping-d82l7\" (UID: \"582cddc3-f46b-47c9-8aad-1b38afaf5cc0\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-d82l7" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.089200 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhnvw\" (UniqueName: \"kubernetes.io/projected/582cddc3-f46b-47c9-8aad-1b38afaf5cc0-kube-api-access-qhnvw\") pod \"nova-kuttl-cell0-cell-mapping-d82l7\" (UID: \"582cddc3-f46b-47c9-8aad-1b38afaf5cc0\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-d82l7" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.089229 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/582cddc3-f46b-47c9-8aad-1b38afaf5cc0-scripts\") pod \"nova-kuttl-cell0-cell-mapping-d82l7\" (UID: \"582cddc3-f46b-47c9-8aad-1b38afaf5cc0\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-d82l7" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.137383 4786 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.139522 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.143042 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.149361 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.191303 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/582cddc3-f46b-47c9-8aad-1b38afaf5cc0-config-data\") pod \"nova-kuttl-cell0-cell-mapping-d82l7\" (UID: \"582cddc3-f46b-47c9-8aad-1b38afaf5cc0\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-d82l7" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.191355 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhnvw\" (UniqueName: \"kubernetes.io/projected/582cddc3-f46b-47c9-8aad-1b38afaf5cc0-kube-api-access-qhnvw\") pod \"nova-kuttl-cell0-cell-mapping-d82l7\" (UID: \"582cddc3-f46b-47c9-8aad-1b38afaf5cc0\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-d82l7" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.191389 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/582cddc3-f46b-47c9-8aad-1b38afaf5cc0-scripts\") pod \"nova-kuttl-cell0-cell-mapping-d82l7\" (UID: \"582cddc3-f46b-47c9-8aad-1b38afaf5cc0\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-d82l7" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.204985 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/582cddc3-f46b-47c9-8aad-1b38afaf5cc0-scripts\") pod \"nova-kuttl-cell0-cell-mapping-d82l7\" (UID: \"582cddc3-f46b-47c9-8aad-1b38afaf5cc0\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-d82l7" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.205726 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/582cddc3-f46b-47c9-8aad-1b38afaf5cc0-config-data\") pod \"nova-kuttl-cell0-cell-mapping-d82l7\" (UID: \"582cddc3-f46b-47c9-8aad-1b38afaf5cc0\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-d82l7" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.215704 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.216628 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.218513 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.220537 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhnvw\" (UniqueName: \"kubernetes.io/projected/582cddc3-f46b-47c9-8aad-1b38afaf5cc0-kube-api-access-qhnvw\") pod \"nova-kuttl-cell0-cell-mapping-d82l7\" (UID: \"582cddc3-f46b-47c9-8aad-1b38afaf5cc0\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-d82l7" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.240362 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.249984 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.251285 4786 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.254025 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.263818 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.292529 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36ad6eb3-4132-421f-b81d-4bbc6deeea83-config-data\") pod \"nova-kuttl-api-0\" (UID: \"36ad6eb3-4132-421f-b81d-4bbc6deeea83\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.292668 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36ad6eb3-4132-421f-b81d-4bbc6deeea83-logs\") pod \"nova-kuttl-api-0\" (UID: \"36ad6eb3-4132-421f-b81d-4bbc6deeea83\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.292695 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7prcp\" (UniqueName: \"kubernetes.io/projected/36ad6eb3-4132-421f-b81d-4bbc6deeea83-kube-api-access-7prcp\") pod \"nova-kuttl-api-0\" (UID: \"36ad6eb3-4132-421f-b81d-4bbc6deeea83\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.395548 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7prcp\" (UniqueName: \"kubernetes.io/projected/36ad6eb3-4132-421f-b81d-4bbc6deeea83-kube-api-access-7prcp\") pod \"nova-kuttl-api-0\" (UID: \"36ad6eb3-4132-421f-b81d-4bbc6deeea83\") " pod="nova-kuttl-default/nova-kuttl-api-0" 
Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.395631 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84b75d8e-eec6-4eba-9e92-6b4fd493539e-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"84b75d8e-eec6-4eba-9e92-6b4fd493539e\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.395663 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z82zm\" (UniqueName: \"kubernetes.io/projected/84b75d8e-eec6-4eba-9e92-6b4fd493539e-kube-api-access-z82zm\") pod \"nova-kuttl-scheduler-0\" (UID: \"84b75d8e-eec6-4eba-9e92-6b4fd493539e\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.395711 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36ad6eb3-4132-421f-b81d-4bbc6deeea83-config-data\") pod \"nova-kuttl-api-0\" (UID: \"36ad6eb3-4132-421f-b81d-4bbc6deeea83\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.395731 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a751f973-3974-4eca-9df3-6d5467f11ecf-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"a751f973-3974-4eca-9df3-6d5467f11ecf\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.395954 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a751f973-3974-4eca-9df3-6d5467f11ecf-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"a751f973-3974-4eca-9df3-6d5467f11ecf\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:37:08 crc kubenswrapper[4786]: 
I0127 13:37:08.395989 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffhmk\" (UniqueName: \"kubernetes.io/projected/a751f973-3974-4eca-9df3-6d5467f11ecf-kube-api-access-ffhmk\") pod \"nova-kuttl-metadata-0\" (UID: \"a751f973-3974-4eca-9df3-6d5467f11ecf\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.396053 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36ad6eb3-4132-421f-b81d-4bbc6deeea83-logs\") pod \"nova-kuttl-api-0\" (UID: \"36ad6eb3-4132-421f-b81d-4bbc6deeea83\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.396807 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36ad6eb3-4132-421f-b81d-4bbc6deeea83-logs\") pod \"nova-kuttl-api-0\" (UID: \"36ad6eb3-4132-421f-b81d-4bbc6deeea83\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.402318 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36ad6eb3-4132-421f-b81d-4bbc6deeea83-config-data\") pod \"nova-kuttl-api-0\" (UID: \"36ad6eb3-4132-421f-b81d-4bbc6deeea83\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.416575 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7prcp\" (UniqueName: \"kubernetes.io/projected/36ad6eb3-4132-421f-b81d-4bbc6deeea83-kube-api-access-7prcp\") pod \"nova-kuttl-api-0\" (UID: \"36ad6eb3-4132-421f-b81d-4bbc6deeea83\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.422832 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-d82l7" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.501672 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84b75d8e-eec6-4eba-9e92-6b4fd493539e-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"84b75d8e-eec6-4eba-9e92-6b4fd493539e\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.501719 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z82zm\" (UniqueName: \"kubernetes.io/projected/84b75d8e-eec6-4eba-9e92-6b4fd493539e-kube-api-access-z82zm\") pod \"nova-kuttl-scheduler-0\" (UID: \"84b75d8e-eec6-4eba-9e92-6b4fd493539e\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.501768 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a751f973-3974-4eca-9df3-6d5467f11ecf-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"a751f973-3974-4eca-9df3-6d5467f11ecf\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.501795 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a751f973-3974-4eca-9df3-6d5467f11ecf-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"a751f973-3974-4eca-9df3-6d5467f11ecf\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.501812 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffhmk\" (UniqueName: \"kubernetes.io/projected/a751f973-3974-4eca-9df3-6d5467f11ecf-kube-api-access-ffhmk\") pod \"nova-kuttl-metadata-0\" (UID: \"a751f973-3974-4eca-9df3-6d5467f11ecf\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" 
Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.503460 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.503511 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a751f973-3974-4eca-9df3-6d5467f11ecf-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"a751f973-3974-4eca-9df3-6d5467f11ecf\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.506483 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84b75d8e-eec6-4eba-9e92-6b4fd493539e-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"84b75d8e-eec6-4eba-9e92-6b4fd493539e\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.523411 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z82zm\" (UniqueName: \"kubernetes.io/projected/84b75d8e-eec6-4eba-9e92-6b4fd493539e-kube-api-access-z82zm\") pod \"nova-kuttl-scheduler-0\" (UID: \"84b75d8e-eec6-4eba-9e92-6b4fd493539e\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.527377 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffhmk\" (UniqueName: \"kubernetes.io/projected/a751f973-3974-4eca-9df3-6d5467f11ecf-kube-api-access-ffhmk\") pod \"nova-kuttl-metadata-0\" (UID: \"a751f973-3974-4eca-9df3-6d5467f11ecf\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.530004 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a751f973-3974-4eca-9df3-6d5467f11ecf-config-data\") pod \"nova-kuttl-metadata-0\" (UID: 
\"a751f973-3974-4eca-9df3-6d5467f11ecf\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.552171 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.560384 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-5smpw"] Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.602559 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.664475 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-host-discover-695l2"] Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.697407 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-5smpw" event={"ID":"a3fad1ba-93ff-4712-bfaa-17cb808e87f4","Type":"ContainerStarted","Data":"52b70c097a23959d9ee7bc86fd6eba4d5d11684e5fe395dc2e6297230754c1ee"} Jan 27 13:37:08 crc kubenswrapper[4786]: I0127 13:37:08.957296 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-d82l7"] Jan 27 13:37:08 crc kubenswrapper[4786]: W0127 13:37:08.975499 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod582cddc3_f46b_47c9_8aad_1b38afaf5cc0.slice/crio-ae2fd523dfbfdb39e37c009e42a5a88d2aa061df4f6fdcf09cb2e3863e29226e WatchSource:0}: Error finding container ae2fd523dfbfdb39e37c009e42a5a88d2aa061df4f6fdcf09cb2e3863e29226e: Status 404 returned error can't find the container with id ae2fd523dfbfdb39e37c009e42a5a88d2aa061df4f6fdcf09cb2e3863e29226e Jan 27 13:37:09 crc kubenswrapper[4786]: I0127 13:37:09.036774 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["nova-kuttl-default/keystone-db-sync-jkjg4"] Jan 27 13:37:09 crc kubenswrapper[4786]: I0127 13:37:09.044374 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-db-sync-jkjg4"] Jan 27 13:37:09 crc kubenswrapper[4786]: I0127 13:37:09.097807 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:37:09 crc kubenswrapper[4786]: I0127 13:37:09.109022 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:37:09 crc kubenswrapper[4786]: I0127 13:37:09.294954 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:37:09 crc kubenswrapper[4786]: I0127 13:37:09.474166 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c0d1344-182d-4ada-9aa9-0f105aaaccc6" path="/var/lib/kubelet/pods/1c0d1344-182d-4ada-9aa9-0f105aaaccc6/volumes" Jan 27 13:37:09 crc kubenswrapper[4786]: I0127 13:37:09.705562 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"84b75d8e-eec6-4eba-9e92-6b4fd493539e","Type":"ContainerStarted","Data":"c9bde321a409139df282d041e1a0215a02ee9095c6fe09e7acf90a257e7c55f6"} Jan 27 13:37:09 crc kubenswrapper[4786]: I0127 13:37:09.705625 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"84b75d8e-eec6-4eba-9e92-6b4fd493539e","Type":"ContainerStarted","Data":"db8d0b84f1adcce0002c0c20abb258934b9da506fa5f98ce87f223646671a489"} Jan 27 13:37:09 crc kubenswrapper[4786]: I0127 13:37:09.711558 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-d82l7" event={"ID":"582cddc3-f46b-47c9-8aad-1b38afaf5cc0","Type":"ContainerStarted","Data":"9207e066c51944cff29e83b28af6f19b1a9afcb3a7a370da0790d8269def2295"} Jan 27 13:37:09 crc kubenswrapper[4786]: I0127 13:37:09.711588 4786 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-d82l7" event={"ID":"582cddc3-f46b-47c9-8aad-1b38afaf5cc0","Type":"ContainerStarted","Data":"ae2fd523dfbfdb39e37c009e42a5a88d2aa061df4f6fdcf09cb2e3863e29226e"} Jan 27 13:37:09 crc kubenswrapper[4786]: I0127 13:37:09.720097 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=1.720061589 podStartE2EDuration="1.720061589s" podCreationTimestamp="2026-01-27 13:37:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:37:09.718413524 +0000 UTC m=+1812.929027643" watchObservedRunningTime="2026-01-27 13:37:09.720061589 +0000 UTC m=+1812.930675708" Jan 27 13:37:09 crc kubenswrapper[4786]: I0127 13:37:09.725755 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"36ad6eb3-4132-421f-b81d-4bbc6deeea83","Type":"ContainerStarted","Data":"d258d75c219e029f49d3698972016e3e145d38b96fa8ffa10d051e26c097f1e4"} Jan 27 13:37:09 crc kubenswrapper[4786]: I0127 13:37:09.725802 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"36ad6eb3-4132-421f-b81d-4bbc6deeea83","Type":"ContainerStarted","Data":"f6fa53d30bffa1afff4d6ce8ecd830120aaa50cbeb15dc5872064d9e96d0224d"} Jan 27 13:37:09 crc kubenswrapper[4786]: I0127 13:37:09.734015 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-5smpw" event={"ID":"a3fad1ba-93ff-4712-bfaa-17cb808e87f4","Type":"ContainerStarted","Data":"0e6a8752f4314f3e4f2a1269aa8e2d4a0b2f48b7784f3ab7a6d3541ad48948f5"} Jan 27 13:37:09 crc kubenswrapper[4786]: I0127 13:37:09.737758 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" 
event={"ID":"a751f973-3974-4eca-9df3-6d5467f11ecf","Type":"ContainerStarted","Data":"de11d6090a0337f4ab47a8d88326c76000a77e09455a46c276ce793082376a4c"} Jan 27 13:37:09 crc kubenswrapper[4786]: I0127 13:37:09.737931 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"a751f973-3974-4eca-9df3-6d5467f11ecf","Type":"ContainerStarted","Data":"c10d4ed4c5ef3b785b6dc2f493adead375591312fc2d54855dd33615b9085210"} Jan 27 13:37:09 crc kubenswrapper[4786]: I0127 13:37:09.739962 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-695l2" event={"ID":"c5f44e04-2290-4929-ac72-51dd6213663d","Type":"ContainerStarted","Data":"2f7e6fb14ffc91c2f74279b18a4dd680785aba092eeeabb74b5f995cbd303edd"} Jan 27 13:37:09 crc kubenswrapper[4786]: I0127 13:37:09.740082 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-695l2" event={"ID":"c5f44e04-2290-4929-ac72-51dd6213663d","Type":"ContainerStarted","Data":"e45e12bd74ce959d6dd278659ca24f8af392841f3501c2d00ffcf7907e992eae"} Jan 27 13:37:09 crc kubenswrapper[4786]: I0127 13:37:09.753016 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-d82l7" podStartSLOduration=2.7530013589999998 podStartE2EDuration="2.753001359s" podCreationTimestamp="2026-01-27 13:37:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:37:09.734349729 +0000 UTC m=+1812.944963848" watchObservedRunningTime="2026-01-27 13:37:09.753001359 +0000 UTC m=+1812.963615478" Jan 27 13:37:09 crc kubenswrapper[4786]: I0127 13:37:09.760993 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-5smpw" podStartSLOduration=2.760979148 podStartE2EDuration="2.760979148s" 
podCreationTimestamp="2026-01-27 13:37:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:37:09.74717913 +0000 UTC m=+1812.957793249" watchObservedRunningTime="2026-01-27 13:37:09.760979148 +0000 UTC m=+1812.971593267" Jan 27 13:37:09 crc kubenswrapper[4786]: I0127 13:37:09.768660 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-695l2" podStartSLOduration=2.768644167 podStartE2EDuration="2.768644167s" podCreationTimestamp="2026-01-27 13:37:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:37:09.763440325 +0000 UTC m=+1812.974054444" watchObservedRunningTime="2026-01-27 13:37:09.768644167 +0000 UTC m=+1812.979258286" Jan 27 13:37:10 crc kubenswrapper[4786]: I0127 13:37:10.763506 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"36ad6eb3-4132-421f-b81d-4bbc6deeea83","Type":"ContainerStarted","Data":"79b46b8234357c908a65d628740e23ba3a112a998919b0d433281824d46b5de2"} Jan 27 13:37:10 crc kubenswrapper[4786]: I0127 13:37:10.775510 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"a751f973-3974-4eca-9df3-6d5467f11ecf","Type":"ContainerStarted","Data":"413f236b18cf2edb6cbe51cebd2b92eec3646c45222fe8319c359fa085df86c1"} Jan 27 13:37:10 crc kubenswrapper[4786]: I0127 13:37:10.792806 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.792789006 podStartE2EDuration="2.792789006s" podCreationTimestamp="2026-01-27 13:37:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:37:10.786912345 +0000 UTC 
m=+1813.997526464" watchObservedRunningTime="2026-01-27 13:37:10.792789006 +0000 UTC m=+1814.003403125" Jan 27 13:37:10 crc kubenswrapper[4786]: I0127 13:37:10.814410 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.814390456 podStartE2EDuration="2.814390456s" podCreationTimestamp="2026-01-27 13:37:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:37:10.807023135 +0000 UTC m=+1814.017637254" watchObservedRunningTime="2026-01-27 13:37:10.814390456 +0000 UTC m=+1814.025004565" Jan 27 13:37:12 crc kubenswrapper[4786]: I0127 13:37:12.792010 4786 generic.go:334] "Generic (PLEG): container finished" podID="c5f44e04-2290-4929-ac72-51dd6213663d" containerID="2f7e6fb14ffc91c2f74279b18a4dd680785aba092eeeabb74b5f995cbd303edd" exitCode=255 Jan 27 13:37:12 crc kubenswrapper[4786]: I0127 13:37:12.792114 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-695l2" event={"ID":"c5f44e04-2290-4929-ac72-51dd6213663d","Type":"ContainerDied","Data":"2f7e6fb14ffc91c2f74279b18a4dd680785aba092eeeabb74b5f995cbd303edd"} Jan 27 13:37:12 crc kubenswrapper[4786]: I0127 13:37:12.792915 4786 scope.go:117] "RemoveContainer" containerID="2f7e6fb14ffc91c2f74279b18a4dd680785aba092eeeabb74b5f995cbd303edd" Jan 27 13:37:13 crc kubenswrapper[4786]: I0127 13:37:13.553512 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:37:13 crc kubenswrapper[4786]: I0127 13:37:13.603689 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:37:13 crc kubenswrapper[4786]: I0127 13:37:13.603748 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:37:13 crc 
kubenswrapper[4786]: I0127 13:37:13.801525 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-695l2" event={"ID":"c5f44e04-2290-4929-ac72-51dd6213663d","Type":"ContainerStarted","Data":"be632094cabe357e076c0316ba3c28d193938661eaf06b02fbb9ba8532faee2d"} Jan 27 13:37:14 crc kubenswrapper[4786]: E0127 13:37:14.466020 4786 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod582cddc3_f46b_47c9_8aad_1b38afaf5cc0.slice/crio-conmon-9207e066c51944cff29e83b28af6f19b1a9afcb3a7a370da0790d8269def2295.scope\": RecentStats: unable to find data in memory cache]" Jan 27 13:37:14 crc kubenswrapper[4786]: I0127 13:37:14.813433 4786 generic.go:334] "Generic (PLEG): container finished" podID="582cddc3-f46b-47c9-8aad-1b38afaf5cc0" containerID="9207e066c51944cff29e83b28af6f19b1a9afcb3a7a370da0790d8269def2295" exitCode=0 Jan 27 13:37:14 crc kubenswrapper[4786]: I0127 13:37:14.813526 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-d82l7" event={"ID":"582cddc3-f46b-47c9-8aad-1b38afaf5cc0","Type":"ContainerDied","Data":"9207e066c51944cff29e83b28af6f19b1a9afcb3a7a370da0790d8269def2295"} Jan 27 13:37:14 crc kubenswrapper[4786]: I0127 13:37:14.815788 4786 generic.go:334] "Generic (PLEG): container finished" podID="a3fad1ba-93ff-4712-bfaa-17cb808e87f4" containerID="0e6a8752f4314f3e4f2a1269aa8e2d4a0b2f48b7784f3ab7a6d3541ad48948f5" exitCode=0 Jan 27 13:37:14 crc kubenswrapper[4786]: I0127 13:37:14.815903 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-5smpw" event={"ID":"a3fad1ba-93ff-4712-bfaa-17cb808e87f4","Type":"ContainerDied","Data":"0e6a8752f4314f3e4f2a1269aa8e2d4a0b2f48b7784f3ab7a6d3541ad48948f5"} Jan 27 13:37:15 crc kubenswrapper[4786]: I0127 13:37:15.825239 4786 generic.go:334] "Generic 
(PLEG): container finished" podID="c5f44e04-2290-4929-ac72-51dd6213663d" containerID="be632094cabe357e076c0316ba3c28d193938661eaf06b02fbb9ba8532faee2d" exitCode=0 Jan 27 13:37:15 crc kubenswrapper[4786]: I0127 13:37:15.825327 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-695l2" event={"ID":"c5f44e04-2290-4929-ac72-51dd6213663d","Type":"ContainerDied","Data":"be632094cabe357e076c0316ba3c28d193938661eaf06b02fbb9ba8532faee2d"} Jan 27 13:37:15 crc kubenswrapper[4786]: I0127 13:37:15.826130 4786 scope.go:117] "RemoveContainer" containerID="2f7e6fb14ffc91c2f74279b18a4dd680785aba092eeeabb74b5f995cbd303edd" Jan 27 13:37:16 crc kubenswrapper[4786]: I0127 13:37:16.184967 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-5smpw" Jan 27 13:37:16 crc kubenswrapper[4786]: I0127 13:37:16.201421 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-d82l7" Jan 27 13:37:16 crc kubenswrapper[4786]: I0127 13:37:16.267489 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/582cddc3-f46b-47c9-8aad-1b38afaf5cc0-config-data\") pod \"582cddc3-f46b-47c9-8aad-1b38afaf5cc0\" (UID: \"582cddc3-f46b-47c9-8aad-1b38afaf5cc0\") " Jan 27 13:37:16 crc kubenswrapper[4786]: I0127 13:37:16.267596 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvsgr\" (UniqueName: \"kubernetes.io/projected/a3fad1ba-93ff-4712-bfaa-17cb808e87f4-kube-api-access-vvsgr\") pod \"a3fad1ba-93ff-4712-bfaa-17cb808e87f4\" (UID: \"a3fad1ba-93ff-4712-bfaa-17cb808e87f4\") " Jan 27 13:37:16 crc kubenswrapper[4786]: I0127 13:37:16.267674 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhnvw\" (UniqueName: 
\"kubernetes.io/projected/582cddc3-f46b-47c9-8aad-1b38afaf5cc0-kube-api-access-qhnvw\") pod \"582cddc3-f46b-47c9-8aad-1b38afaf5cc0\" (UID: \"582cddc3-f46b-47c9-8aad-1b38afaf5cc0\") " Jan 27 13:37:16 crc kubenswrapper[4786]: I0127 13:37:16.268717 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3fad1ba-93ff-4712-bfaa-17cb808e87f4-scripts\") pod \"a3fad1ba-93ff-4712-bfaa-17cb808e87f4\" (UID: \"a3fad1ba-93ff-4712-bfaa-17cb808e87f4\") " Jan 27 13:37:16 crc kubenswrapper[4786]: I0127 13:37:16.268814 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/582cddc3-f46b-47c9-8aad-1b38afaf5cc0-scripts\") pod \"582cddc3-f46b-47c9-8aad-1b38afaf5cc0\" (UID: \"582cddc3-f46b-47c9-8aad-1b38afaf5cc0\") " Jan 27 13:37:16 crc kubenswrapper[4786]: I0127 13:37:16.268856 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3fad1ba-93ff-4712-bfaa-17cb808e87f4-config-data\") pod \"a3fad1ba-93ff-4712-bfaa-17cb808e87f4\" (UID: \"a3fad1ba-93ff-4712-bfaa-17cb808e87f4\") " Jan 27 13:37:16 crc kubenswrapper[4786]: I0127 13:37:16.273199 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/582cddc3-f46b-47c9-8aad-1b38afaf5cc0-kube-api-access-qhnvw" (OuterVolumeSpecName: "kube-api-access-qhnvw") pod "582cddc3-f46b-47c9-8aad-1b38afaf5cc0" (UID: "582cddc3-f46b-47c9-8aad-1b38afaf5cc0"). InnerVolumeSpecName "kube-api-access-qhnvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:37:16 crc kubenswrapper[4786]: I0127 13:37:16.273468 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3fad1ba-93ff-4712-bfaa-17cb808e87f4-scripts" (OuterVolumeSpecName: "scripts") pod "a3fad1ba-93ff-4712-bfaa-17cb808e87f4" (UID: "a3fad1ba-93ff-4712-bfaa-17cb808e87f4"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:37:16 crc kubenswrapper[4786]: I0127 13:37:16.274302 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/582cddc3-f46b-47c9-8aad-1b38afaf5cc0-scripts" (OuterVolumeSpecName: "scripts") pod "582cddc3-f46b-47c9-8aad-1b38afaf5cc0" (UID: "582cddc3-f46b-47c9-8aad-1b38afaf5cc0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:37:16 crc kubenswrapper[4786]: I0127 13:37:16.290365 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/582cddc3-f46b-47c9-8aad-1b38afaf5cc0-config-data" (OuterVolumeSpecName: "config-data") pod "582cddc3-f46b-47c9-8aad-1b38afaf5cc0" (UID: "582cddc3-f46b-47c9-8aad-1b38afaf5cc0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:37:16 crc kubenswrapper[4786]: I0127 13:37:16.291061 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3fad1ba-93ff-4712-bfaa-17cb808e87f4-config-data" (OuterVolumeSpecName: "config-data") pod "a3fad1ba-93ff-4712-bfaa-17cb808e87f4" (UID: "a3fad1ba-93ff-4712-bfaa-17cb808e87f4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:37:16 crc kubenswrapper[4786]: I0127 13:37:16.296831 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3fad1ba-93ff-4712-bfaa-17cb808e87f4-kube-api-access-vvsgr" (OuterVolumeSpecName: "kube-api-access-vvsgr") pod "a3fad1ba-93ff-4712-bfaa-17cb808e87f4" (UID: "a3fad1ba-93ff-4712-bfaa-17cb808e87f4"). InnerVolumeSpecName "kube-api-access-vvsgr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:37:16 crc kubenswrapper[4786]: I0127 13:37:16.370661 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvsgr\" (UniqueName: \"kubernetes.io/projected/a3fad1ba-93ff-4712-bfaa-17cb808e87f4-kube-api-access-vvsgr\") on node \"crc\" DevicePath \"\"" Jan 27 13:37:16 crc kubenswrapper[4786]: I0127 13:37:16.370781 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhnvw\" (UniqueName: \"kubernetes.io/projected/582cddc3-f46b-47c9-8aad-1b38afaf5cc0-kube-api-access-qhnvw\") on node \"crc\" DevicePath \"\"" Jan 27 13:37:16 crc kubenswrapper[4786]: I0127 13:37:16.370792 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3fad1ba-93ff-4712-bfaa-17cb808e87f4-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:37:16 crc kubenswrapper[4786]: I0127 13:37:16.370801 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/582cddc3-f46b-47c9-8aad-1b38afaf5cc0-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:37:16 crc kubenswrapper[4786]: I0127 13:37:16.370811 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3fad1ba-93ff-4712-bfaa-17cb808e87f4-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:37:16 crc kubenswrapper[4786]: I0127 13:37:16.370821 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/582cddc3-f46b-47c9-8aad-1b38afaf5cc0-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:37:16 crc kubenswrapper[4786]: I0127 13:37:16.837830 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-d82l7" event={"ID":"582cddc3-f46b-47c9-8aad-1b38afaf5cc0","Type":"ContainerDied","Data":"ae2fd523dfbfdb39e37c009e42a5a88d2aa061df4f6fdcf09cb2e3863e29226e"} Jan 27 13:37:16 crc 
kubenswrapper[4786]: I0127 13:37:16.837862 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-d82l7" Jan 27 13:37:16 crc kubenswrapper[4786]: I0127 13:37:16.837874 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae2fd523dfbfdb39e37c009e42a5a88d2aa061df4f6fdcf09cb2e3863e29226e" Jan 27 13:37:16 crc kubenswrapper[4786]: I0127 13:37:16.842986 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-5smpw" event={"ID":"a3fad1ba-93ff-4712-bfaa-17cb808e87f4","Type":"ContainerDied","Data":"52b70c097a23959d9ee7bc86fd6eba4d5d11684e5fe395dc2e6297230754c1ee"} Jan 27 13:37:16 crc kubenswrapper[4786]: I0127 13:37:16.843028 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52b70c097a23959d9ee7bc86fd6eba4d5d11684e5fe395dc2e6297230754c1ee" Jan 27 13:37:16 crc kubenswrapper[4786]: I0127 13:37:16.843011 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-5smpw" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.020039 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.020330 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="36ad6eb3-4132-421f-b81d-4bbc6deeea83" containerName="nova-kuttl-api-log" containerID="cri-o://d258d75c219e029f49d3698972016e3e145d38b96fa8ffa10d051e26c097f1e4" gracePeriod=30 Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.020436 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="36ad6eb3-4132-421f-b81d-4bbc6deeea83" containerName="nova-kuttl-api-api" containerID="cri-o://79b46b8234357c908a65d628740e23ba3a112a998919b0d433281824d46b5de2" gracePeriod=30 Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.088954 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.089438 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="84b75d8e-eec6-4eba-9e92-6b4fd493539e" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://c9bde321a409139df282d041e1a0215a02ee9095c6fe09e7acf90a257e7c55f6" gracePeriod=30 Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.107745 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.108192 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="a751f973-3974-4eca-9df3-6d5467f11ecf" containerName="nova-kuttl-metadata-metadata" 
containerID="cri-o://413f236b18cf2edb6cbe51cebd2b92eec3646c45222fe8319c359fa085df86c1" gracePeriod=30 Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.108037 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="a751f973-3974-4eca-9df3-6d5467f11ecf" containerName="nova-kuttl-metadata-log" containerID="cri-o://de11d6090a0337f4ab47a8d88326c76000a77e09455a46c276ce793082376a4c" gracePeriod=30 Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.371872 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-695l2" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.388877 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5f44e04-2290-4929-ac72-51dd6213663d-config-data\") pod \"c5f44e04-2290-4929-ac72-51dd6213663d\" (UID: \"c5f44e04-2290-4929-ac72-51dd6213663d\") " Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.389014 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txxns\" (UniqueName: \"kubernetes.io/projected/c5f44e04-2290-4929-ac72-51dd6213663d-kube-api-access-txxns\") pod \"c5f44e04-2290-4929-ac72-51dd6213663d\" (UID: \"c5f44e04-2290-4929-ac72-51dd6213663d\") " Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.389085 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5f44e04-2290-4929-ac72-51dd6213663d-scripts\") pod \"c5f44e04-2290-4929-ac72-51dd6213663d\" (UID: \"c5f44e04-2290-4929-ac72-51dd6213663d\") " Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.396044 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f44e04-2290-4929-ac72-51dd6213663d-scripts" (OuterVolumeSpecName: "scripts") pod 
"c5f44e04-2290-4929-ac72-51dd6213663d" (UID: "c5f44e04-2290-4929-ac72-51dd6213663d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.402794 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f44e04-2290-4929-ac72-51dd6213663d-kube-api-access-txxns" (OuterVolumeSpecName: "kube-api-access-txxns") pod "c5f44e04-2290-4929-ac72-51dd6213663d" (UID: "c5f44e04-2290-4929-ac72-51dd6213663d"). InnerVolumeSpecName "kube-api-access-txxns". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.415308 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f44e04-2290-4929-ac72-51dd6213663d-config-data" (OuterVolumeSpecName: "config-data") pod "c5f44e04-2290-4929-ac72-51dd6213663d" (UID: "c5f44e04-2290-4929-ac72-51dd6213663d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.491177 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5f44e04-2290-4929-ac72-51dd6213663d-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.491214 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txxns\" (UniqueName: \"kubernetes.io/projected/c5f44e04-2290-4929-ac72-51dd6213663d-kube-api-access-txxns\") on node \"crc\" DevicePath \"\"" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.491226 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5f44e04-2290-4929-ac72-51dd6213663d-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.636045 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.692668 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffhmk\" (UniqueName: \"kubernetes.io/projected/a751f973-3974-4eca-9df3-6d5467f11ecf-kube-api-access-ffhmk\") pod \"a751f973-3974-4eca-9df3-6d5467f11ecf\" (UID: \"a751f973-3974-4eca-9df3-6d5467f11ecf\") " Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.692736 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a751f973-3974-4eca-9df3-6d5467f11ecf-config-data\") pod \"a751f973-3974-4eca-9df3-6d5467f11ecf\" (UID: \"a751f973-3974-4eca-9df3-6d5467f11ecf\") " Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.692893 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a751f973-3974-4eca-9df3-6d5467f11ecf-logs\") pod \"a751f973-3974-4eca-9df3-6d5467f11ecf\" (UID: \"a751f973-3974-4eca-9df3-6d5467f11ecf\") " Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.693456 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a751f973-3974-4eca-9df3-6d5467f11ecf-logs" (OuterVolumeSpecName: "logs") pod "a751f973-3974-4eca-9df3-6d5467f11ecf" (UID: "a751f973-3974-4eca-9df3-6d5467f11ecf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.698797 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a751f973-3974-4eca-9df3-6d5467f11ecf-kube-api-access-ffhmk" (OuterVolumeSpecName: "kube-api-access-ffhmk") pod "a751f973-3974-4eca-9df3-6d5467f11ecf" (UID: "a751f973-3974-4eca-9df3-6d5467f11ecf"). InnerVolumeSpecName "kube-api-access-ffhmk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.737404 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a751f973-3974-4eca-9df3-6d5467f11ecf-config-data" (OuterVolumeSpecName: "config-data") pod "a751f973-3974-4eca-9df3-6d5467f11ecf" (UID: "a751f973-3974-4eca-9df3-6d5467f11ecf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.794439 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffhmk\" (UniqueName: \"kubernetes.io/projected/a751f973-3974-4eca-9df3-6d5467f11ecf-kube-api-access-ffhmk\") on node \"crc\" DevicePath \"\"" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.794478 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a751f973-3974-4eca-9df3-6d5467f11ecf-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.794487 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a751f973-3974-4eca-9df3-6d5467f11ecf-logs\") on node \"crc\" DevicePath \"\"" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.867723 4786 generic.go:334] "Generic (PLEG): container finished" podID="36ad6eb3-4132-421f-b81d-4bbc6deeea83" containerID="79b46b8234357c908a65d628740e23ba3a112a998919b0d433281824d46b5de2" exitCode=0 Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.868747 4786 generic.go:334] "Generic (PLEG): container finished" podID="36ad6eb3-4132-421f-b81d-4bbc6deeea83" containerID="d258d75c219e029f49d3698972016e3e145d38b96fa8ffa10d051e26c097f1e4" exitCode=143 Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.868051 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" 
event={"ID":"36ad6eb3-4132-421f-b81d-4bbc6deeea83","Type":"ContainerDied","Data":"79b46b8234357c908a65d628740e23ba3a112a998919b0d433281824d46b5de2"} Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.869074 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"36ad6eb3-4132-421f-b81d-4bbc6deeea83","Type":"ContainerDied","Data":"d258d75c219e029f49d3698972016e3e145d38b96fa8ffa10d051e26c097f1e4"} Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.871557 4786 generic.go:334] "Generic (PLEG): container finished" podID="a751f973-3974-4eca-9df3-6d5467f11ecf" containerID="413f236b18cf2edb6cbe51cebd2b92eec3646c45222fe8319c359fa085df86c1" exitCode=0 Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.871586 4786 generic.go:334] "Generic (PLEG): container finished" podID="a751f973-3974-4eca-9df3-6d5467f11ecf" containerID="de11d6090a0337f4ab47a8d88326c76000a77e09455a46c276ce793082376a4c" exitCode=143 Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.871817 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"a751f973-3974-4eca-9df3-6d5467f11ecf","Type":"ContainerDied","Data":"413f236b18cf2edb6cbe51cebd2b92eec3646c45222fe8319c359fa085df86c1"} Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.871846 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"a751f973-3974-4eca-9df3-6d5467f11ecf","Type":"ContainerDied","Data":"de11d6090a0337f4ab47a8d88326c76000a77e09455a46c276ce793082376a4c"} Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.871861 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"a751f973-3974-4eca-9df3-6d5467f11ecf","Type":"ContainerDied","Data":"c10d4ed4c5ef3b785b6dc2f493adead375591312fc2d54855dd33615b9085210"} Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.871911 4786 scope.go:117] 
"RemoveContainer" containerID="413f236b18cf2edb6cbe51cebd2b92eec3646c45222fe8319c359fa085df86c1" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.872091 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.884998 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-695l2" event={"ID":"c5f44e04-2290-4929-ac72-51dd6213663d","Type":"ContainerDied","Data":"e45e12bd74ce959d6dd278659ca24f8af392841f3501c2d00ffcf7907e992eae"} Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.885043 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e45e12bd74ce959d6dd278659ca24f8af392841f3501c2d00ffcf7907e992eae" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.885076 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-695l2" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.909522 4786 scope.go:117] "RemoveContainer" containerID="de11d6090a0337f4ab47a8d88326c76000a77e09455a46c276ce793082376a4c" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.916715 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.924296 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.934500 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:37:17 crc kubenswrapper[4786]: E0127 13:37:17.934983 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5f44e04-2290-4929-ac72-51dd6213663d" containerName="nova-manage" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.935011 4786 
state_mem.go:107] "Deleted CPUSet assignment" podUID="c5f44e04-2290-4929-ac72-51dd6213663d" containerName="nova-manage" Jan 27 13:37:17 crc kubenswrapper[4786]: E0127 13:37:17.935034 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="582cddc3-f46b-47c9-8aad-1b38afaf5cc0" containerName="nova-manage" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.935044 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="582cddc3-f46b-47c9-8aad-1b38afaf5cc0" containerName="nova-manage" Jan 27 13:37:17 crc kubenswrapper[4786]: E0127 13:37:17.935061 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3fad1ba-93ff-4712-bfaa-17cb808e87f4" containerName="nova-manage" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.935069 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3fad1ba-93ff-4712-bfaa-17cb808e87f4" containerName="nova-manage" Jan 27 13:37:17 crc kubenswrapper[4786]: E0127 13:37:17.935082 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a751f973-3974-4eca-9df3-6d5467f11ecf" containerName="nova-kuttl-metadata-log" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.935091 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="a751f973-3974-4eca-9df3-6d5467f11ecf" containerName="nova-kuttl-metadata-log" Jan 27 13:37:17 crc kubenswrapper[4786]: E0127 13:37:17.935515 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a751f973-3974-4eca-9df3-6d5467f11ecf" containerName="nova-kuttl-metadata-metadata" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.935542 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="a751f973-3974-4eca-9df3-6d5467f11ecf" containerName="nova-kuttl-metadata-metadata" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.935832 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="582cddc3-f46b-47c9-8aad-1b38afaf5cc0" containerName="nova-manage" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.935871 
4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="a751f973-3974-4eca-9df3-6d5467f11ecf" containerName="nova-kuttl-metadata-metadata" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.935882 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5f44e04-2290-4929-ac72-51dd6213663d" containerName="nova-manage" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.935895 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="a751f973-3974-4eca-9df3-6d5467f11ecf" containerName="nova-kuttl-metadata-log" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.935907 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3fad1ba-93ff-4712-bfaa-17cb808e87f4" containerName="nova-manage" Jan 27 13:37:17 crc kubenswrapper[4786]: E0127 13:37:17.936486 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5f44e04-2290-4929-ac72-51dd6213663d" containerName="nova-manage" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.936502 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5f44e04-2290-4929-ac72-51dd6213663d" containerName="nova-manage" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.936809 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5f44e04-2290-4929-ac72-51dd6213663d" containerName="nova-manage" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.939113 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.941357 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.958296 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.991942 4786 scope.go:117] "RemoveContainer" containerID="413f236b18cf2edb6cbe51cebd2b92eec3646c45222fe8319c359fa085df86c1" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.993208 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:37:17 crc kubenswrapper[4786]: E0127 13:37:17.993551 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"413f236b18cf2edb6cbe51cebd2b92eec3646c45222fe8319c359fa085df86c1\": container with ID starting with 413f236b18cf2edb6cbe51cebd2b92eec3646c45222fe8319c359fa085df86c1 not found: ID does not exist" containerID="413f236b18cf2edb6cbe51cebd2b92eec3646c45222fe8319c359fa085df86c1" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.993584 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"413f236b18cf2edb6cbe51cebd2b92eec3646c45222fe8319c359fa085df86c1"} err="failed to get container status \"413f236b18cf2edb6cbe51cebd2b92eec3646c45222fe8319c359fa085df86c1\": rpc error: code = NotFound desc = could not find container \"413f236b18cf2edb6cbe51cebd2b92eec3646c45222fe8319c359fa085df86c1\": container with ID starting with 413f236b18cf2edb6cbe51cebd2b92eec3646c45222fe8319c359fa085df86c1 not found: ID does not exist" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.993680 4786 scope.go:117] "RemoveContainer" 
containerID="de11d6090a0337f4ab47a8d88326c76000a77e09455a46c276ce793082376a4c" Jan 27 13:37:17 crc kubenswrapper[4786]: E0127 13:37:17.994262 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de11d6090a0337f4ab47a8d88326c76000a77e09455a46c276ce793082376a4c\": container with ID starting with de11d6090a0337f4ab47a8d88326c76000a77e09455a46c276ce793082376a4c not found: ID does not exist" containerID="de11d6090a0337f4ab47a8d88326c76000a77e09455a46c276ce793082376a4c" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.994285 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de11d6090a0337f4ab47a8d88326c76000a77e09455a46c276ce793082376a4c"} err="failed to get container status \"de11d6090a0337f4ab47a8d88326c76000a77e09455a46c276ce793082376a4c\": rpc error: code = NotFound desc = could not find container \"de11d6090a0337f4ab47a8d88326c76000a77e09455a46c276ce793082376a4c\": container with ID starting with de11d6090a0337f4ab47a8d88326c76000a77e09455a46c276ce793082376a4c not found: ID does not exist" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.994298 4786 scope.go:117] "RemoveContainer" containerID="413f236b18cf2edb6cbe51cebd2b92eec3646c45222fe8319c359fa085df86c1" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.994561 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"413f236b18cf2edb6cbe51cebd2b92eec3646c45222fe8319c359fa085df86c1"} err="failed to get container status \"413f236b18cf2edb6cbe51cebd2b92eec3646c45222fe8319c359fa085df86c1\": rpc error: code = NotFound desc = could not find container \"413f236b18cf2edb6cbe51cebd2b92eec3646c45222fe8319c359fa085df86c1\": container with ID starting with 413f236b18cf2edb6cbe51cebd2b92eec3646c45222fe8319c359fa085df86c1 not found: ID does not exist" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.994579 4786 scope.go:117] 
"RemoveContainer" containerID="de11d6090a0337f4ab47a8d88326c76000a77e09455a46c276ce793082376a4c" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.996211 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de11d6090a0337f4ab47a8d88326c76000a77e09455a46c276ce793082376a4c"} err="failed to get container status \"de11d6090a0337f4ab47a8d88326c76000a77e09455a46c276ce793082376a4c\": rpc error: code = NotFound desc = could not find container \"de11d6090a0337f4ab47a8d88326c76000a77e09455a46c276ce793082376a4c\": container with ID starting with de11d6090a0337f4ab47a8d88326c76000a77e09455a46c276ce793082376a4c not found: ID does not exist" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.997884 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2942d84f-4aa3-4aa8-bec5-baf8bb43765c-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"2942d84f-4aa3-4aa8-bec5-baf8bb43765c\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.998043 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk8fx\" (UniqueName: \"kubernetes.io/projected/2942d84f-4aa3-4aa8-bec5-baf8bb43765c-kube-api-access-bk8fx\") pod \"nova-kuttl-metadata-0\" (UID: \"2942d84f-4aa3-4aa8-bec5-baf8bb43765c\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:37:17 crc kubenswrapper[4786]: I0127 13:37:17.998153 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2942d84f-4aa3-4aa8-bec5-baf8bb43765c-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"2942d84f-4aa3-4aa8-bec5-baf8bb43765c\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:37:18 crc kubenswrapper[4786]: I0127 13:37:18.035767 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["nova-kuttl-default/placement-db-sync-fqbs4"] Jan 27 13:37:18 crc kubenswrapper[4786]: I0127 13:37:18.042912 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/placement-db-sync-fqbs4"] Jan 27 13:37:18 crc kubenswrapper[4786]: I0127 13:37:18.099085 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36ad6eb3-4132-421f-b81d-4bbc6deeea83-logs\") pod \"36ad6eb3-4132-421f-b81d-4bbc6deeea83\" (UID: \"36ad6eb3-4132-421f-b81d-4bbc6deeea83\") " Jan 27 13:37:18 crc kubenswrapper[4786]: I0127 13:37:18.099338 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7prcp\" (UniqueName: \"kubernetes.io/projected/36ad6eb3-4132-421f-b81d-4bbc6deeea83-kube-api-access-7prcp\") pod \"36ad6eb3-4132-421f-b81d-4bbc6deeea83\" (UID: \"36ad6eb3-4132-421f-b81d-4bbc6deeea83\") " Jan 27 13:37:18 crc kubenswrapper[4786]: I0127 13:37:18.099380 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36ad6eb3-4132-421f-b81d-4bbc6deeea83-config-data\") pod \"36ad6eb3-4132-421f-b81d-4bbc6deeea83\" (UID: \"36ad6eb3-4132-421f-b81d-4bbc6deeea83\") " Jan 27 13:37:18 crc kubenswrapper[4786]: I0127 13:37:18.099494 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36ad6eb3-4132-421f-b81d-4bbc6deeea83-logs" (OuterVolumeSpecName: "logs") pod "36ad6eb3-4132-421f-b81d-4bbc6deeea83" (UID: "36ad6eb3-4132-421f-b81d-4bbc6deeea83"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:37:18 crc kubenswrapper[4786]: I0127 13:37:18.099643 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2942d84f-4aa3-4aa8-bec5-baf8bb43765c-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"2942d84f-4aa3-4aa8-bec5-baf8bb43765c\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:37:18 crc kubenswrapper[4786]: I0127 13:37:18.099703 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk8fx\" (UniqueName: \"kubernetes.io/projected/2942d84f-4aa3-4aa8-bec5-baf8bb43765c-kube-api-access-bk8fx\") pod \"nova-kuttl-metadata-0\" (UID: \"2942d84f-4aa3-4aa8-bec5-baf8bb43765c\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:37:18 crc kubenswrapper[4786]: I0127 13:37:18.099742 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2942d84f-4aa3-4aa8-bec5-baf8bb43765c-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"2942d84f-4aa3-4aa8-bec5-baf8bb43765c\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:37:18 crc kubenswrapper[4786]: I0127 13:37:18.099864 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36ad6eb3-4132-421f-b81d-4bbc6deeea83-logs\") on node \"crc\" DevicePath \"\"" Jan 27 13:37:18 crc kubenswrapper[4786]: I0127 13:37:18.100037 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2942d84f-4aa3-4aa8-bec5-baf8bb43765c-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"2942d84f-4aa3-4aa8-bec5-baf8bb43765c\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:37:18 crc kubenswrapper[4786]: I0127 13:37:18.102495 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/36ad6eb3-4132-421f-b81d-4bbc6deeea83-kube-api-access-7prcp" (OuterVolumeSpecName: "kube-api-access-7prcp") pod "36ad6eb3-4132-421f-b81d-4bbc6deeea83" (UID: "36ad6eb3-4132-421f-b81d-4bbc6deeea83"). InnerVolumeSpecName "kube-api-access-7prcp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:37:18 crc kubenswrapper[4786]: I0127 13:37:18.103080 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2942d84f-4aa3-4aa8-bec5-baf8bb43765c-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"2942d84f-4aa3-4aa8-bec5-baf8bb43765c\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:37:18 crc kubenswrapper[4786]: I0127 13:37:18.117395 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk8fx\" (UniqueName: \"kubernetes.io/projected/2942d84f-4aa3-4aa8-bec5-baf8bb43765c-kube-api-access-bk8fx\") pod \"nova-kuttl-metadata-0\" (UID: \"2942d84f-4aa3-4aa8-bec5-baf8bb43765c\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:37:18 crc kubenswrapper[4786]: I0127 13:37:18.124396 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36ad6eb3-4132-421f-b81d-4bbc6deeea83-config-data" (OuterVolumeSpecName: "config-data") pod "36ad6eb3-4132-421f-b81d-4bbc6deeea83" (UID: "36ad6eb3-4132-421f-b81d-4bbc6deeea83"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:37:18 crc kubenswrapper[4786]: I0127 13:37:18.201513 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7prcp\" (UniqueName: \"kubernetes.io/projected/36ad6eb3-4132-421f-b81d-4bbc6deeea83-kube-api-access-7prcp\") on node \"crc\" DevicePath \"\"" Jan 27 13:37:18 crc kubenswrapper[4786]: I0127 13:37:18.201556 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36ad6eb3-4132-421f-b81d-4bbc6deeea83-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:37:18 crc kubenswrapper[4786]: I0127 13:37:18.312868 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:37:18 crc kubenswrapper[4786]: I0127 13:37:18.734466 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:37:18 crc kubenswrapper[4786]: W0127 13:37:18.743756 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2942d84f_4aa3_4aa8_bec5_baf8bb43765c.slice/crio-1bca17d079d725f15fdb7f654da41537a79b878cff66db5229a3cb14f0228335 WatchSource:0}: Error finding container 1bca17d079d725f15fdb7f654da41537a79b878cff66db5229a3cb14f0228335: Status 404 returned error can't find the container with id 1bca17d079d725f15fdb7f654da41537a79b878cff66db5229a3cb14f0228335 Jan 27 13:37:18 crc kubenswrapper[4786]: I0127 13:37:18.894862 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"2942d84f-4aa3-4aa8-bec5-baf8bb43765c","Type":"ContainerStarted","Data":"1bca17d079d725f15fdb7f654da41537a79b878cff66db5229a3cb14f0228335"} Jan 27 13:37:18 crc kubenswrapper[4786]: I0127 13:37:18.898113 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" 
event={"ID":"36ad6eb3-4132-421f-b81d-4bbc6deeea83","Type":"ContainerDied","Data":"f6fa53d30bffa1afff4d6ce8ecd830120aaa50cbeb15dc5872064d9e96d0224d"} Jan 27 13:37:18 crc kubenswrapper[4786]: I0127 13:37:18.898158 4786 scope.go:117] "RemoveContainer" containerID="79b46b8234357c908a65d628740e23ba3a112a998919b0d433281824d46b5de2" Jan 27 13:37:18 crc kubenswrapper[4786]: I0127 13:37:18.898299 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:37:18 crc kubenswrapper[4786]: I0127 13:37:18.937884 4786 scope.go:117] "RemoveContainer" containerID="d258d75c219e029f49d3698972016e3e145d38b96fa8ffa10d051e26c097f1e4" Jan 27 13:37:18 crc kubenswrapper[4786]: I0127 13:37:18.963842 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:37:18 crc kubenswrapper[4786]: I0127 13:37:18.974810 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:37:18 crc kubenswrapper[4786]: I0127 13:37:18.982776 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:37:18 crc kubenswrapper[4786]: E0127 13:37:18.983294 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36ad6eb3-4132-421f-b81d-4bbc6deeea83" containerName="nova-kuttl-api-log" Jan 27 13:37:18 crc kubenswrapper[4786]: I0127 13:37:18.983385 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="36ad6eb3-4132-421f-b81d-4bbc6deeea83" containerName="nova-kuttl-api-log" Jan 27 13:37:18 crc kubenswrapper[4786]: E0127 13:37:18.983464 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36ad6eb3-4132-421f-b81d-4bbc6deeea83" containerName="nova-kuttl-api-api" Jan 27 13:37:18 crc kubenswrapper[4786]: I0127 13:37:18.983542 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="36ad6eb3-4132-421f-b81d-4bbc6deeea83" containerName="nova-kuttl-api-api" Jan 27 13:37:18 crc 
kubenswrapper[4786]: I0127 13:37:18.983834 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="36ad6eb3-4132-421f-b81d-4bbc6deeea83" containerName="nova-kuttl-api-log" Jan 27 13:37:18 crc kubenswrapper[4786]: I0127 13:37:18.983921 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="36ad6eb3-4132-421f-b81d-4bbc6deeea83" containerName="nova-kuttl-api-api" Jan 27 13:37:18 crc kubenswrapper[4786]: I0127 13:37:18.985038 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:37:18 crc kubenswrapper[4786]: I0127 13:37:18.992150 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 27 13:37:18 crc kubenswrapper[4786]: I0127 13:37:18.995455 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:37:19 crc kubenswrapper[4786]: I0127 13:37:19.015243 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5565ba6-24db-449e-b480-939133f9848f-logs\") pod \"nova-kuttl-api-0\" (UID: \"c5565ba6-24db-449e-b480-939133f9848f\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:37:19 crc kubenswrapper[4786]: I0127 13:37:19.015345 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5565ba6-24db-449e-b480-939133f9848f-config-data\") pod \"nova-kuttl-api-0\" (UID: \"c5565ba6-24db-449e-b480-939133f9848f\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:37:19 crc kubenswrapper[4786]: I0127 13:37:19.015365 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5kfq\" (UniqueName: \"kubernetes.io/projected/c5565ba6-24db-449e-b480-939133f9848f-kube-api-access-x5kfq\") pod \"nova-kuttl-api-0\" (UID: 
\"c5565ba6-24db-449e-b480-939133f9848f\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:37:19 crc kubenswrapper[4786]: I0127 13:37:19.117583 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5565ba6-24db-449e-b480-939133f9848f-logs\") pod \"nova-kuttl-api-0\" (UID: \"c5565ba6-24db-449e-b480-939133f9848f\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:37:19 crc kubenswrapper[4786]: I0127 13:37:19.117767 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5565ba6-24db-449e-b480-939133f9848f-config-data\") pod \"nova-kuttl-api-0\" (UID: \"c5565ba6-24db-449e-b480-939133f9848f\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:37:19 crc kubenswrapper[4786]: I0127 13:37:19.117805 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5kfq\" (UniqueName: \"kubernetes.io/projected/c5565ba6-24db-449e-b480-939133f9848f-kube-api-access-x5kfq\") pod \"nova-kuttl-api-0\" (UID: \"c5565ba6-24db-449e-b480-939133f9848f\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:37:19 crc kubenswrapper[4786]: I0127 13:37:19.118423 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5565ba6-24db-449e-b480-939133f9848f-logs\") pod \"nova-kuttl-api-0\" (UID: \"c5565ba6-24db-449e-b480-939133f9848f\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:37:19 crc kubenswrapper[4786]: I0127 13:37:19.121975 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5565ba6-24db-449e-b480-939133f9848f-config-data\") pod \"nova-kuttl-api-0\" (UID: \"c5565ba6-24db-449e-b480-939133f9848f\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:37:19 crc kubenswrapper[4786]: I0127 13:37:19.135656 4786 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-x5kfq\" (UniqueName: \"kubernetes.io/projected/c5565ba6-24db-449e-b480-939133f9848f-kube-api-access-x5kfq\") pod \"nova-kuttl-api-0\" (UID: \"c5565ba6-24db-449e-b480-939133f9848f\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:37:19 crc kubenswrapper[4786]: I0127 13:37:19.306998 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:37:19 crc kubenswrapper[4786]: I0127 13:37:19.476400 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36ad6eb3-4132-421f-b81d-4bbc6deeea83" path="/var/lib/kubelet/pods/36ad6eb3-4132-421f-b81d-4bbc6deeea83/volumes" Jan 27 13:37:19 crc kubenswrapper[4786]: I0127 13:37:19.477733 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78638470-5811-4d8d-9600-aa15f9e5baed" path="/var/lib/kubelet/pods/78638470-5811-4d8d-9600-aa15f9e5baed/volumes" Jan 27 13:37:19 crc kubenswrapper[4786]: I0127 13:37:19.478308 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a751f973-3974-4eca-9df3-6d5467f11ecf" path="/var/lib/kubelet/pods/a751f973-3974-4eca-9df3-6d5467f11ecf/volumes" Jan 27 13:37:19 crc kubenswrapper[4786]: I0127 13:37:19.734289 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:37:19 crc kubenswrapper[4786]: W0127 13:37:19.736701 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5565ba6_24db_449e_b480_939133f9848f.slice/crio-1d91535d83c507a1685553695491a654aeb60b22b05b45486522495eed2ef658 WatchSource:0}: Error finding container 1d91535d83c507a1685553695491a654aeb60b22b05b45486522495eed2ef658: Status 404 returned error can't find the container with id 1d91535d83c507a1685553695491a654aeb60b22b05b45486522495eed2ef658 Jan 27 13:37:19 crc kubenswrapper[4786]: I0127 13:37:19.911528 4786 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"2942d84f-4aa3-4aa8-bec5-baf8bb43765c","Type":"ContainerStarted","Data":"0ba2be3e8bc554522c9b257099bb1983905bb3c9d18210949c86baf3c51be49b"} Jan 27 13:37:19 crc kubenswrapper[4786]: I0127 13:37:19.911581 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"2942d84f-4aa3-4aa8-bec5-baf8bb43765c","Type":"ContainerStarted","Data":"d8d0ad3f35046e9b662e47aba8749d406208143400c1c05577f7eb946550ca3c"} Jan 27 13:37:19 crc kubenswrapper[4786]: I0127 13:37:19.914908 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"c5565ba6-24db-449e-b480-939133f9848f","Type":"ContainerStarted","Data":"1d91535d83c507a1685553695491a654aeb60b22b05b45486522495eed2ef658"} Jan 27 13:37:19 crc kubenswrapper[4786]: I0127 13:37:19.935083 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.935053716 podStartE2EDuration="2.935053716s" podCreationTimestamp="2026-01-27 13:37:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:37:19.929340319 +0000 UTC m=+1823.139954458" watchObservedRunningTime="2026-01-27 13:37:19.935053716 +0000 UTC m=+1823.145667835" Jan 27 13:37:20 crc kubenswrapper[4786]: I0127 13:37:20.926262 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"c5565ba6-24db-449e-b480-939133f9848f","Type":"ContainerStarted","Data":"e745c5cf9f468ed80350d57344f8d7471e7c9d7f631645dcfe93d3523f9e8daf"} Jan 27 13:37:20 crc kubenswrapper[4786]: I0127 13:37:20.927162 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" 
event={"ID":"c5565ba6-24db-449e-b480-939133f9848f","Type":"ContainerStarted","Data":"fe611adf5e7e8af4eeb6f04c0923f019f55e5710efb57e7d31e4ce1f3dc6b951"} Jan 27 13:37:20 crc kubenswrapper[4786]: I0127 13:37:20.947955 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.947938046 podStartE2EDuration="2.947938046s" podCreationTimestamp="2026-01-27 13:37:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:37:20.943183716 +0000 UTC m=+1824.153797835" watchObservedRunningTime="2026-01-27 13:37:20.947938046 +0000 UTC m=+1824.158552165" Jan 27 13:37:21 crc kubenswrapper[4786]: I0127 13:37:21.465277 4786 scope.go:117] "RemoveContainer" containerID="2f94fef961bba31453718ee505c65aba4d5e2d46c581dccdf2daaebbe3a39906" Jan 27 13:37:21 crc kubenswrapper[4786]: E0127 13:37:21.465777 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:37:21 crc kubenswrapper[4786]: I0127 13:37:21.496759 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:37:21 crc kubenswrapper[4786]: I0127 13:37:21.571886 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84b75d8e-eec6-4eba-9e92-6b4fd493539e-config-data\") pod \"84b75d8e-eec6-4eba-9e92-6b4fd493539e\" (UID: \"84b75d8e-eec6-4eba-9e92-6b4fd493539e\") " Jan 27 13:37:21 crc kubenswrapper[4786]: I0127 13:37:21.572033 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z82zm\" (UniqueName: \"kubernetes.io/projected/84b75d8e-eec6-4eba-9e92-6b4fd493539e-kube-api-access-z82zm\") pod \"84b75d8e-eec6-4eba-9e92-6b4fd493539e\" (UID: \"84b75d8e-eec6-4eba-9e92-6b4fd493539e\") " Jan 27 13:37:21 crc kubenswrapper[4786]: I0127 13:37:21.584807 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84b75d8e-eec6-4eba-9e92-6b4fd493539e-kube-api-access-z82zm" (OuterVolumeSpecName: "kube-api-access-z82zm") pod "84b75d8e-eec6-4eba-9e92-6b4fd493539e" (UID: "84b75d8e-eec6-4eba-9e92-6b4fd493539e"). InnerVolumeSpecName "kube-api-access-z82zm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:37:21 crc kubenswrapper[4786]: I0127 13:37:21.596135 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84b75d8e-eec6-4eba-9e92-6b4fd493539e-config-data" (OuterVolumeSpecName: "config-data") pod "84b75d8e-eec6-4eba-9e92-6b4fd493539e" (UID: "84b75d8e-eec6-4eba-9e92-6b4fd493539e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:37:21 crc kubenswrapper[4786]: I0127 13:37:21.673848 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z82zm\" (UniqueName: \"kubernetes.io/projected/84b75d8e-eec6-4eba-9e92-6b4fd493539e-kube-api-access-z82zm\") on node \"crc\" DevicePath \"\"" Jan 27 13:37:21 crc kubenswrapper[4786]: I0127 13:37:21.673891 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84b75d8e-eec6-4eba-9e92-6b4fd493539e-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:37:21 crc kubenswrapper[4786]: I0127 13:37:21.940658 4786 generic.go:334] "Generic (PLEG): container finished" podID="84b75d8e-eec6-4eba-9e92-6b4fd493539e" containerID="c9bde321a409139df282d041e1a0215a02ee9095c6fe09e7acf90a257e7c55f6" exitCode=0 Jan 27 13:37:21 crc kubenswrapper[4786]: I0127 13:37:21.940729 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:37:21 crc kubenswrapper[4786]: I0127 13:37:21.940743 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"84b75d8e-eec6-4eba-9e92-6b4fd493539e","Type":"ContainerDied","Data":"c9bde321a409139df282d041e1a0215a02ee9095c6fe09e7acf90a257e7c55f6"} Jan 27 13:37:21 crc kubenswrapper[4786]: I0127 13:37:21.941869 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"84b75d8e-eec6-4eba-9e92-6b4fd493539e","Type":"ContainerDied","Data":"db8d0b84f1adcce0002c0c20abb258934b9da506fa5f98ce87f223646671a489"} Jan 27 13:37:21 crc kubenswrapper[4786]: I0127 13:37:21.941902 4786 scope.go:117] "RemoveContainer" containerID="c9bde321a409139df282d041e1a0215a02ee9095c6fe09e7acf90a257e7c55f6" Jan 27 13:37:21 crc kubenswrapper[4786]: I0127 13:37:21.966870 4786 scope.go:117] "RemoveContainer" 
containerID="c9bde321a409139df282d041e1a0215a02ee9095c6fe09e7acf90a257e7c55f6" Jan 27 13:37:21 crc kubenswrapper[4786]: E0127 13:37:21.969823 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9bde321a409139df282d041e1a0215a02ee9095c6fe09e7acf90a257e7c55f6\": container with ID starting with c9bde321a409139df282d041e1a0215a02ee9095c6fe09e7acf90a257e7c55f6 not found: ID does not exist" containerID="c9bde321a409139df282d041e1a0215a02ee9095c6fe09e7acf90a257e7c55f6" Jan 27 13:37:21 crc kubenswrapper[4786]: I0127 13:37:21.969870 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9bde321a409139df282d041e1a0215a02ee9095c6fe09e7acf90a257e7c55f6"} err="failed to get container status \"c9bde321a409139df282d041e1a0215a02ee9095c6fe09e7acf90a257e7c55f6\": rpc error: code = NotFound desc = could not find container \"c9bde321a409139df282d041e1a0215a02ee9095c6fe09e7acf90a257e7c55f6\": container with ID starting with c9bde321a409139df282d041e1a0215a02ee9095c6fe09e7acf90a257e7c55f6 not found: ID does not exist" Jan 27 13:37:21 crc kubenswrapper[4786]: I0127 13:37:21.976085 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:37:21 crc kubenswrapper[4786]: I0127 13:37:21.983312 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:37:21 crc kubenswrapper[4786]: I0127 13:37:21.995289 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:37:21 crc kubenswrapper[4786]: E0127 13:37:21.995734 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84b75d8e-eec6-4eba-9e92-6b4fd493539e" containerName="nova-kuttl-scheduler-scheduler" Jan 27 13:37:21 crc kubenswrapper[4786]: I0127 13:37:21.995759 4786 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="84b75d8e-eec6-4eba-9e92-6b4fd493539e" containerName="nova-kuttl-scheduler-scheduler" Jan 27 13:37:21 crc kubenswrapper[4786]: I0127 13:37:21.996282 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="84b75d8e-eec6-4eba-9e92-6b4fd493539e" containerName="nova-kuttl-scheduler-scheduler" Jan 27 13:37:21 crc kubenswrapper[4786]: I0127 13:37:21.996936 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:37:21 crc kubenswrapper[4786]: I0127 13:37:21.999663 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 27 13:37:22 crc kubenswrapper[4786]: I0127 13:37:22.015676 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:37:22 crc kubenswrapper[4786]: I0127 13:37:22.080908 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccf56c3f-31e2-41f8-82c0-9302d280efe0-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"ccf56c3f-31e2-41f8-82c0-9302d280efe0\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:37:22 crc kubenswrapper[4786]: I0127 13:37:22.081249 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjxvj\" (UniqueName: \"kubernetes.io/projected/ccf56c3f-31e2-41f8-82c0-9302d280efe0-kube-api-access-hjxvj\") pod \"nova-kuttl-scheduler-0\" (UID: \"ccf56c3f-31e2-41f8-82c0-9302d280efe0\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:37:22 crc kubenswrapper[4786]: I0127 13:37:22.182387 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccf56c3f-31e2-41f8-82c0-9302d280efe0-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"ccf56c3f-31e2-41f8-82c0-9302d280efe0\") " 
pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:37:22 crc kubenswrapper[4786]: I0127 13:37:22.182432 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjxvj\" (UniqueName: \"kubernetes.io/projected/ccf56c3f-31e2-41f8-82c0-9302d280efe0-kube-api-access-hjxvj\") pod \"nova-kuttl-scheduler-0\" (UID: \"ccf56c3f-31e2-41f8-82c0-9302d280efe0\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:37:22 crc kubenswrapper[4786]: I0127 13:37:22.202735 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccf56c3f-31e2-41f8-82c0-9302d280efe0-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"ccf56c3f-31e2-41f8-82c0-9302d280efe0\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:37:22 crc kubenswrapper[4786]: I0127 13:37:22.209559 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjxvj\" (UniqueName: \"kubernetes.io/projected/ccf56c3f-31e2-41f8-82c0-9302d280efe0-kube-api-access-hjxvj\") pod \"nova-kuttl-scheduler-0\" (UID: \"ccf56c3f-31e2-41f8-82c0-9302d280efe0\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:37:22 crc kubenswrapper[4786]: I0127 13:37:22.310145 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:37:22 crc kubenswrapper[4786]: I0127 13:37:22.728586 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:37:22 crc kubenswrapper[4786]: W0127 13:37:22.733380 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccf56c3f_31e2_41f8_82c0_9302d280efe0.slice/crio-3052578b220b54910e0cba0a5d9ecdd487c57051267f59ca869ef2287176c5a4 WatchSource:0}: Error finding container 3052578b220b54910e0cba0a5d9ecdd487c57051267f59ca869ef2287176c5a4: Status 404 returned error can't find the container with id 3052578b220b54910e0cba0a5d9ecdd487c57051267f59ca869ef2287176c5a4 Jan 27 13:37:22 crc kubenswrapper[4786]: I0127 13:37:22.951987 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"ccf56c3f-31e2-41f8-82c0-9302d280efe0","Type":"ContainerStarted","Data":"3052578b220b54910e0cba0a5d9ecdd487c57051267f59ca869ef2287176c5a4"} Jan 27 13:37:23 crc kubenswrapper[4786]: I0127 13:37:23.048216 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-ghf44"] Jan 27 13:37:23 crc kubenswrapper[4786]: I0127 13:37:23.055075 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-ghf44"] Jan 27 13:37:23 crc kubenswrapper[4786]: I0127 13:37:23.313082 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:37:23 crc kubenswrapper[4786]: I0127 13:37:23.313131 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:37:23 crc kubenswrapper[4786]: I0127 13:37:23.475479 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84b75d8e-eec6-4eba-9e92-6b4fd493539e" 
path="/var/lib/kubelet/pods/84b75d8e-eec6-4eba-9e92-6b4fd493539e/volumes" Jan 27 13:37:23 crc kubenswrapper[4786]: I0127 13:37:23.476709 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be1e59cf-7674-4a16-be0e-bd08a540a304" path="/var/lib/kubelet/pods/be1e59cf-7674-4a16-be0e-bd08a540a304/volumes" Jan 27 13:37:28 crc kubenswrapper[4786]: I0127 13:37:28.011807 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"ccf56c3f-31e2-41f8-82c0-9302d280efe0","Type":"ContainerStarted","Data":"6eb12ce04ec68848477f89c1e4dc244f29635a30c004039cb8b54d7e3416e2b0"} Jan 27 13:37:28 crc kubenswrapper[4786]: I0127 13:37:28.034587 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=7.034566705 podStartE2EDuration="7.034566705s" podCreationTimestamp="2026-01-27 13:37:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:37:28.028930941 +0000 UTC m=+1831.239545080" watchObservedRunningTime="2026-01-27 13:37:28.034566705 +0000 UTC m=+1831.245180824" Jan 27 13:37:28 crc kubenswrapper[4786]: I0127 13:37:28.313086 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:37:28 crc kubenswrapper[4786]: I0127 13:37:28.313241 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:37:29 crc kubenswrapper[4786]: I0127 13:37:29.307681 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:37:29 crc kubenswrapper[4786]: I0127 13:37:29.308826 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:37:29 crc kubenswrapper[4786]: I0127 
13:37:29.395829 4786 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="2942d84f-4aa3-4aa8-bec5-baf8bb43765c" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.200:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 13:37:29 crc kubenswrapper[4786]: I0127 13:37:29.396010 4786 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="2942d84f-4aa3-4aa8-bec5-baf8bb43765c" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.200:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 13:37:29 crc kubenswrapper[4786]: E0127 13:37:29.762040 4786 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.5:58744->38.102.83.5:38075: write tcp 38.102.83.5:58744->38.102.83.5:38075: write: connection reset by peer Jan 27 13:37:30 crc kubenswrapper[4786]: I0127 13:37:30.389912 4786 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="c5565ba6-24db-449e-b480-939133f9848f" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.201:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 13:37:30 crc kubenswrapper[4786]: I0127 13:37:30.390284 4786 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="c5565ba6-24db-449e-b480-939133f9848f" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.201:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 13:37:32 crc kubenswrapper[4786]: I0127 13:37:32.310805 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:37:32 crc kubenswrapper[4786]: I0127 
13:37:32.311129 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:37:32 crc kubenswrapper[4786]: I0127 13:37:32.333532 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:37:33 crc kubenswrapper[4786]: I0127 13:37:33.083095 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:37:36 crc kubenswrapper[4786]: I0127 13:37:36.465443 4786 scope.go:117] "RemoveContainer" containerID="2f94fef961bba31453718ee505c65aba4d5e2d46c581dccdf2daaebbe3a39906" Jan 27 13:37:36 crc kubenswrapper[4786]: E0127 13:37:36.466277 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:37:38 crc kubenswrapper[4786]: I0127 13:37:38.316986 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:37:38 crc kubenswrapper[4786]: I0127 13:37:38.319382 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:37:38 crc kubenswrapper[4786]: I0127 13:37:38.319787 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:37:39 crc kubenswrapper[4786]: I0127 13:37:39.112037 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:37:39 crc kubenswrapper[4786]: I0127 13:37:39.311884 4786 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:37:39 crc kubenswrapper[4786]: I0127 13:37:39.311944 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:37:39 crc kubenswrapper[4786]: I0127 13:37:39.312672 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:37:39 crc kubenswrapper[4786]: I0127 13:37:39.312696 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:37:39 crc kubenswrapper[4786]: I0127 13:37:39.315557 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:37:39 crc kubenswrapper[4786]: I0127 13:37:39.315615 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:37:51 crc kubenswrapper[4786]: I0127 13:37:51.465091 4786 scope.go:117] "RemoveContainer" containerID="2f94fef961bba31453718ee505c65aba4d5e2d46c581dccdf2daaebbe3a39906" Jan 27 13:37:51 crc kubenswrapper[4786]: E0127 13:37:51.465801 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:37:58 crc kubenswrapper[4786]: I0127 13:37:58.583230 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 27 13:37:58 crc kubenswrapper[4786]: I0127 13:37:58.584433 4786 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podUID="ade06765-40a5-4b74-a4fb-89726ae6c9d8" containerName="nova-kuttl-cell0-conductor-conductor" containerID="cri-o://a25ca61f1a71b745764506d5e7439363114124b732141b028488fbb2ea11b7f1" gracePeriod=30 Jan 27 13:37:58 crc kubenswrapper[4786]: I0127 13:37:58.601088 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 27 13:37:58 crc kubenswrapper[4786]: I0127 13:37:58.601567 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="b59b5bdb-15fd-4a34-bdcc-1b64a3795f10" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" containerID="cri-o://e92b6484e78557635a0a5c94a05ddbff2f4cfa37e72743a5308f37ddb9231330" gracePeriod=30 Jan 27 13:37:58 crc kubenswrapper[4786]: I0127 13:37:58.679371 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:37:58 crc kubenswrapper[4786]: I0127 13:37:58.679586 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="ccf56c3f-31e2-41f8-82c0-9302d280efe0" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://6eb12ce04ec68848477f89c1e4dc244f29635a30c004039cb8b54d7e3416e2b0" gracePeriod=30 Jan 27 13:37:58 crc kubenswrapper[4786]: I0127 13:37:58.775939 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 27 13:37:58 crc kubenswrapper[4786]: I0127 13:37:58.776193 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podUID="b02f7325-70bc-4165-b5a4-1b5d75bd397e" containerName="nova-kuttl-cell1-conductor-conductor" containerID="cri-o://7bf87927d952736571c7a217ec69caf0056e64bc00ae532d6f71262bc5401133" gracePeriod=30 Jan 27 13:37:58 crc 
kubenswrapper[4786]: I0127 13:37:58.789072 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:37:58 crc kubenswrapper[4786]: I0127 13:37:58.789299 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="c5565ba6-24db-449e-b480-939133f9848f" containerName="nova-kuttl-api-log" containerID="cri-o://e745c5cf9f468ed80350d57344f8d7471e7c9d7f631645dcfe93d3523f9e8daf" gracePeriod=30 Jan 27 13:37:58 crc kubenswrapper[4786]: I0127 13:37:58.789398 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="c5565ba6-24db-449e-b480-939133f9848f" containerName="nova-kuttl-api-api" containerID="cri-o://fe611adf5e7e8af4eeb6f04c0923f019f55e5710efb57e7d31e4ce1f3dc6b951" gracePeriod=30 Jan 27 13:37:59 crc kubenswrapper[4786]: I0127 13:37:59.267192 4786 generic.go:334] "Generic (PLEG): container finished" podID="c5565ba6-24db-449e-b480-939133f9848f" containerID="e745c5cf9f468ed80350d57344f8d7471e7c9d7f631645dcfe93d3523f9e8daf" exitCode=143 Jan 27 13:37:59 crc kubenswrapper[4786]: I0127 13:37:59.267230 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"c5565ba6-24db-449e-b480-939133f9848f","Type":"ContainerDied","Data":"e745c5cf9f468ed80350d57344f8d7471e7c9d7f631645dcfe93d3523f9e8daf"} Jan 27 13:37:59 crc kubenswrapper[4786]: I0127 13:37:59.497869 4786 scope.go:117] "RemoveContainer" containerID="edaa212c1fe3b94e8eec53e84808cad2829913db09ca5e4f630433b24972d1b6" Jan 27 13:37:59 crc kubenswrapper[4786]: I0127 13:37:59.534637 4786 scope.go:117] "RemoveContainer" containerID="d54488c1b2890687f7d5a4af4db34e6f28dc654daf8883bce7c8edac84e2ed23" Jan 27 13:37:59 crc kubenswrapper[4786]: I0127 13:37:59.577505 4786 scope.go:117] "RemoveContainer" containerID="7119c06e61ccf463fe2783cf6be16c7f81ef95b684bbc210374ec440d95ead92" Jan 27 13:37:59 crc 
kubenswrapper[4786]: I0127 13:37:59.648499 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:37:59 crc kubenswrapper[4786]: I0127 13:37:59.711693 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:37:59 crc kubenswrapper[4786]: I0127 13:37:59.782152 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b02f7325-70bc-4165-b5a4-1b5d75bd397e-config-data\") pod \"b02f7325-70bc-4165-b5a4-1b5d75bd397e\" (UID: \"b02f7325-70bc-4165-b5a4-1b5d75bd397e\") " Jan 27 13:37:59 crc kubenswrapper[4786]: I0127 13:37:59.782319 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztw2q\" (UniqueName: \"kubernetes.io/projected/b02f7325-70bc-4165-b5a4-1b5d75bd397e-kube-api-access-ztw2q\") pod \"b02f7325-70bc-4165-b5a4-1b5d75bd397e\" (UID: \"b02f7325-70bc-4165-b5a4-1b5d75bd397e\") " Jan 27 13:37:59 crc kubenswrapper[4786]: I0127 13:37:59.787161 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b02f7325-70bc-4165-b5a4-1b5d75bd397e-kube-api-access-ztw2q" (OuterVolumeSpecName: "kube-api-access-ztw2q") pod "b02f7325-70bc-4165-b5a4-1b5d75bd397e" (UID: "b02f7325-70bc-4165-b5a4-1b5d75bd397e"). InnerVolumeSpecName "kube-api-access-ztw2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:37:59 crc kubenswrapper[4786]: I0127 13:37:59.802377 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b02f7325-70bc-4165-b5a4-1b5d75bd397e-config-data" (OuterVolumeSpecName: "config-data") pod "b02f7325-70bc-4165-b5a4-1b5d75bd397e" (UID: "b02f7325-70bc-4165-b5a4-1b5d75bd397e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:37:59 crc kubenswrapper[4786]: I0127 13:37:59.883182 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjxvj\" (UniqueName: \"kubernetes.io/projected/ccf56c3f-31e2-41f8-82c0-9302d280efe0-kube-api-access-hjxvj\") pod \"ccf56c3f-31e2-41f8-82c0-9302d280efe0\" (UID: \"ccf56c3f-31e2-41f8-82c0-9302d280efe0\") " Jan 27 13:37:59 crc kubenswrapper[4786]: I0127 13:37:59.883348 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccf56c3f-31e2-41f8-82c0-9302d280efe0-config-data\") pod \"ccf56c3f-31e2-41f8-82c0-9302d280efe0\" (UID: \"ccf56c3f-31e2-41f8-82c0-9302d280efe0\") " Jan 27 13:37:59 crc kubenswrapper[4786]: I0127 13:37:59.883744 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b02f7325-70bc-4165-b5a4-1b5d75bd397e-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:37:59 crc kubenswrapper[4786]: I0127 13:37:59.883767 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztw2q\" (UniqueName: \"kubernetes.io/projected/b02f7325-70bc-4165-b5a4-1b5d75bd397e-kube-api-access-ztw2q\") on node \"crc\" DevicePath \"\"" Jan 27 13:37:59 crc kubenswrapper[4786]: I0127 13:37:59.886895 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccf56c3f-31e2-41f8-82c0-9302d280efe0-kube-api-access-hjxvj" (OuterVolumeSpecName: "kube-api-access-hjxvj") pod "ccf56c3f-31e2-41f8-82c0-9302d280efe0" (UID: "ccf56c3f-31e2-41f8-82c0-9302d280efe0"). InnerVolumeSpecName "kube-api-access-hjxvj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:37:59 crc kubenswrapper[4786]: I0127 13:37:59.905441 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccf56c3f-31e2-41f8-82c0-9302d280efe0-config-data" (OuterVolumeSpecName: "config-data") pod "ccf56c3f-31e2-41f8-82c0-9302d280efe0" (UID: "ccf56c3f-31e2-41f8-82c0-9302d280efe0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:37:59 crc kubenswrapper[4786]: I0127 13:37:59.985371 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccf56c3f-31e2-41f8-82c0-9302d280efe0-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:37:59 crc kubenswrapper[4786]: I0127 13:37:59.985408 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjxvj\" (UniqueName: \"kubernetes.io/projected/ccf56c3f-31e2-41f8-82c0-9302d280efe0-kube-api-access-hjxvj\") on node \"crc\" DevicePath \"\"" Jan 27 13:38:00 crc kubenswrapper[4786]: I0127 13:38:00.279371 4786 generic.go:334] "Generic (PLEG): container finished" podID="b02f7325-70bc-4165-b5a4-1b5d75bd397e" containerID="7bf87927d952736571c7a217ec69caf0056e64bc00ae532d6f71262bc5401133" exitCode=0 Jan 27 13:38:00 crc kubenswrapper[4786]: I0127 13:38:00.279427 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:38:00 crc kubenswrapper[4786]: I0127 13:38:00.279411 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"b02f7325-70bc-4165-b5a4-1b5d75bd397e","Type":"ContainerDied","Data":"7bf87927d952736571c7a217ec69caf0056e64bc00ae532d6f71262bc5401133"} Jan 27 13:38:00 crc kubenswrapper[4786]: I0127 13:38:00.279515 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"b02f7325-70bc-4165-b5a4-1b5d75bd397e","Type":"ContainerDied","Data":"a064cca09c40a60313a91a8e9e771d247c9424f3be013bc91784e302668cbe4c"} Jan 27 13:38:00 crc kubenswrapper[4786]: I0127 13:38:00.279582 4786 scope.go:117] "RemoveContainer" containerID="7bf87927d952736571c7a217ec69caf0056e64bc00ae532d6f71262bc5401133" Jan 27 13:38:00 crc kubenswrapper[4786]: I0127 13:38:00.281457 4786 generic.go:334] "Generic (PLEG): container finished" podID="ccf56c3f-31e2-41f8-82c0-9302d280efe0" containerID="6eb12ce04ec68848477f89c1e4dc244f29635a30c004039cb8b54d7e3416e2b0" exitCode=0 Jan 27 13:38:00 crc kubenswrapper[4786]: I0127 13:38:00.281498 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"ccf56c3f-31e2-41f8-82c0-9302d280efe0","Type":"ContainerDied","Data":"6eb12ce04ec68848477f89c1e4dc244f29635a30c004039cb8b54d7e3416e2b0"} Jan 27 13:38:00 crc kubenswrapper[4786]: I0127 13:38:00.281529 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"ccf56c3f-31e2-41f8-82c0-9302d280efe0","Type":"ContainerDied","Data":"3052578b220b54910e0cba0a5d9ecdd487c57051267f59ca869ef2287176c5a4"} Jan 27 13:38:00 crc kubenswrapper[4786]: I0127 13:38:00.281640 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:38:00 crc kubenswrapper[4786]: I0127 13:38:00.302120 4786 scope.go:117] "RemoveContainer" containerID="7bf87927d952736571c7a217ec69caf0056e64bc00ae532d6f71262bc5401133" Jan 27 13:38:00 crc kubenswrapper[4786]: E0127 13:38:00.302598 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bf87927d952736571c7a217ec69caf0056e64bc00ae532d6f71262bc5401133\": container with ID starting with 7bf87927d952736571c7a217ec69caf0056e64bc00ae532d6f71262bc5401133 not found: ID does not exist" containerID="7bf87927d952736571c7a217ec69caf0056e64bc00ae532d6f71262bc5401133" Jan 27 13:38:00 crc kubenswrapper[4786]: I0127 13:38:00.302678 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bf87927d952736571c7a217ec69caf0056e64bc00ae532d6f71262bc5401133"} err="failed to get container status \"7bf87927d952736571c7a217ec69caf0056e64bc00ae532d6f71262bc5401133\": rpc error: code = NotFound desc = could not find container \"7bf87927d952736571c7a217ec69caf0056e64bc00ae532d6f71262bc5401133\": container with ID starting with 7bf87927d952736571c7a217ec69caf0056e64bc00ae532d6f71262bc5401133 not found: ID does not exist" Jan 27 13:38:00 crc kubenswrapper[4786]: I0127 13:38:00.302700 4786 scope.go:117] "RemoveContainer" containerID="6eb12ce04ec68848477f89c1e4dc244f29635a30c004039cb8b54d7e3416e2b0" Jan 27 13:38:00 crc kubenswrapper[4786]: I0127 13:38:00.313771 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 27 13:38:00 crc kubenswrapper[4786]: I0127 13:38:00.334785 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 27 13:38:00 crc kubenswrapper[4786]: I0127 13:38:00.336080 4786 scope.go:117] "RemoveContainer" 
containerID="6eb12ce04ec68848477f89c1e4dc244f29635a30c004039cb8b54d7e3416e2b0" Jan 27 13:38:00 crc kubenswrapper[4786]: E0127 13:38:00.337413 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6eb12ce04ec68848477f89c1e4dc244f29635a30c004039cb8b54d7e3416e2b0\": container with ID starting with 6eb12ce04ec68848477f89c1e4dc244f29635a30c004039cb8b54d7e3416e2b0 not found: ID does not exist" containerID="6eb12ce04ec68848477f89c1e4dc244f29635a30c004039cb8b54d7e3416e2b0" Jan 27 13:38:00 crc kubenswrapper[4786]: I0127 13:38:00.337598 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6eb12ce04ec68848477f89c1e4dc244f29635a30c004039cb8b54d7e3416e2b0"} err="failed to get container status \"6eb12ce04ec68848477f89c1e4dc244f29635a30c004039cb8b54d7e3416e2b0\": rpc error: code = NotFound desc = could not find container \"6eb12ce04ec68848477f89c1e4dc244f29635a30c004039cb8b54d7e3416e2b0\": container with ID starting with 6eb12ce04ec68848477f89c1e4dc244f29635a30c004039cb8b54d7e3416e2b0 not found: ID does not exist" Jan 27 13:38:00 crc kubenswrapper[4786]: I0127 13:38:00.341275 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:38:00 crc kubenswrapper[4786]: I0127 13:38:00.347227 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:38:00 crc kubenswrapper[4786]: I0127 13:38:00.354421 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 27 13:38:00 crc kubenswrapper[4786]: E0127 13:38:00.354865 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b02f7325-70bc-4165-b5a4-1b5d75bd397e" containerName="nova-kuttl-cell1-conductor-conductor" Jan 27 13:38:00 crc kubenswrapper[4786]: I0127 13:38:00.354892 4786 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b02f7325-70bc-4165-b5a4-1b5d75bd397e" containerName="nova-kuttl-cell1-conductor-conductor" Jan 27 13:38:00 crc kubenswrapper[4786]: E0127 13:38:00.354917 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccf56c3f-31e2-41f8-82c0-9302d280efe0" containerName="nova-kuttl-scheduler-scheduler" Jan 27 13:38:00 crc kubenswrapper[4786]: I0127 13:38:00.354926 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccf56c3f-31e2-41f8-82c0-9302d280efe0" containerName="nova-kuttl-scheduler-scheduler" Jan 27 13:38:00 crc kubenswrapper[4786]: I0127 13:38:00.355071 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccf56c3f-31e2-41f8-82c0-9302d280efe0" containerName="nova-kuttl-scheduler-scheduler" Jan 27 13:38:00 crc kubenswrapper[4786]: I0127 13:38:00.355092 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="b02f7325-70bc-4165-b5a4-1b5d75bd397e" containerName="nova-kuttl-cell1-conductor-conductor" Jan 27 13:38:00 crc kubenswrapper[4786]: I0127 13:38:00.355802 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:38:00 crc kubenswrapper[4786]: I0127 13:38:00.357814 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data" Jan 27 13:38:00 crc kubenswrapper[4786]: I0127 13:38:00.364830 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:38:00 crc kubenswrapper[4786]: E0127 13:38:00.365375 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e92b6484e78557635a0a5c94a05ddbff2f4cfa37e72743a5308f37ddb9231330" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 27 13:38:00 crc kubenswrapper[4786]: I0127 13:38:00.365907 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:38:00 crc kubenswrapper[4786]: E0127 13:38:00.367621 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e92b6484e78557635a0a5c94a05ddbff2f4cfa37e72743a5308f37ddb9231330" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 27 13:38:00 crc kubenswrapper[4786]: E0127 13:38:00.369935 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e92b6484e78557635a0a5c94a05ddbff2f4cfa37e72743a5308f37ddb9231330" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 27 13:38:00 crc kubenswrapper[4786]: E0127 13:38:00.369993 4786 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, 
stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="b59b5bdb-15fd-4a34-bdcc-1b64a3795f10" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 27 13:38:00 crc kubenswrapper[4786]: I0127 13:38:00.371103 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 27 13:38:00 crc kubenswrapper[4786]: I0127 13:38:00.371186 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 27 13:38:00 crc kubenswrapper[4786]: I0127 13:38:00.379978 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:38:00 crc kubenswrapper[4786]: I0127 13:38:00.497485 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/718480bb-da0c-488e-a922-86681c5516a4-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"718480bb-da0c-488e-a922-86681c5516a4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:38:00 crc kubenswrapper[4786]: I0127 13:38:00.497787 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cklj\" (UniqueName: \"kubernetes.io/projected/718480bb-da0c-488e-a922-86681c5516a4-kube-api-access-9cklj\") pod \"nova-kuttl-scheduler-0\" (UID: \"718480bb-da0c-488e-a922-86681c5516a4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:38:00 crc kubenswrapper[4786]: I0127 13:38:00.497857 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/914b9838-f13a-4f25-aaed-3cea9132ce49-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"914b9838-f13a-4f25-aaed-3cea9132ce49\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:38:00 crc 
kubenswrapper[4786]: I0127 13:38:00.497906 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb9px\" (UniqueName: \"kubernetes.io/projected/914b9838-f13a-4f25-aaed-3cea9132ce49-kube-api-access-qb9px\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"914b9838-f13a-4f25-aaed-3cea9132ce49\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:38:00 crc kubenswrapper[4786]: I0127 13:38:00.599860 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb9px\" (UniqueName: \"kubernetes.io/projected/914b9838-f13a-4f25-aaed-3cea9132ce49-kube-api-access-qb9px\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"914b9838-f13a-4f25-aaed-3cea9132ce49\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:38:00 crc kubenswrapper[4786]: I0127 13:38:00.599960 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/718480bb-da0c-488e-a922-86681c5516a4-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"718480bb-da0c-488e-a922-86681c5516a4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:38:00 crc kubenswrapper[4786]: I0127 13:38:00.599988 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cklj\" (UniqueName: \"kubernetes.io/projected/718480bb-da0c-488e-a922-86681c5516a4-kube-api-access-9cklj\") pod \"nova-kuttl-scheduler-0\" (UID: \"718480bb-da0c-488e-a922-86681c5516a4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:38:00 crc kubenswrapper[4786]: I0127 13:38:00.600051 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/914b9838-f13a-4f25-aaed-3cea9132ce49-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"914b9838-f13a-4f25-aaed-3cea9132ce49\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 
13:38:00 crc kubenswrapper[4786]: I0127 13:38:00.606556 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/718480bb-da0c-488e-a922-86681c5516a4-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"718480bb-da0c-488e-a922-86681c5516a4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:38:00 crc kubenswrapper[4786]: I0127 13:38:00.606678 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/914b9838-f13a-4f25-aaed-3cea9132ce49-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"914b9838-f13a-4f25-aaed-3cea9132ce49\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:38:00 crc kubenswrapper[4786]: I0127 13:38:00.622734 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb9px\" (UniqueName: \"kubernetes.io/projected/914b9838-f13a-4f25-aaed-3cea9132ce49-kube-api-access-qb9px\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"914b9838-f13a-4f25-aaed-3cea9132ce49\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:38:00 crc kubenswrapper[4786]: I0127 13:38:00.622771 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cklj\" (UniqueName: \"kubernetes.io/projected/718480bb-da0c-488e-a922-86681c5516a4-kube-api-access-9cklj\") pod \"nova-kuttl-scheduler-0\" (UID: \"718480bb-da0c-488e-a922-86681c5516a4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:38:00 crc kubenswrapper[4786]: I0127 13:38:00.675827 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:38:00 crc kubenswrapper[4786]: I0127 13:38:00.690497 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:38:01 crc kubenswrapper[4786]: I0127 13:38:01.111097 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:38:01 crc kubenswrapper[4786]: I0127 13:38:01.168941 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 27 13:38:01 crc kubenswrapper[4786]: I0127 13:38:01.292013 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"914b9838-f13a-4f25-aaed-3cea9132ce49","Type":"ContainerStarted","Data":"c43c42090ff1e1bc570d843903d2eda5ae5bdc1e600bb896d18d2047b9e82528"} Jan 27 13:38:01 crc kubenswrapper[4786]: I0127 13:38:01.296138 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"718480bb-da0c-488e-a922-86681c5516a4","Type":"ContainerStarted","Data":"d53844abe3f78e17fa9568c9407a24ad131151d78f38c0af6973ef64d011d5f2"} Jan 27 13:38:01 crc kubenswrapper[4786]: I0127 13:38:01.473756 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b02f7325-70bc-4165-b5a4-1b5d75bd397e" path="/var/lib/kubelet/pods/b02f7325-70bc-4165-b5a4-1b5d75bd397e/volumes" Jan 27 13:38:01 crc kubenswrapper[4786]: I0127 13:38:01.474400 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccf56c3f-31e2-41f8-82c0-9302d280efe0" path="/var/lib/kubelet/pods/ccf56c3f-31e2-41f8-82c0-9302d280efe0/volumes" Jan 27 13:38:02 crc kubenswrapper[4786]: E0127 13:38:02.229895 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a25ca61f1a71b745764506d5e7439363114124b732141b028488fbb2ea11b7f1" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 13:38:02 crc kubenswrapper[4786]: E0127 
13:38:02.238645 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a25ca61f1a71b745764506d5e7439363114124b732141b028488fbb2ea11b7f1" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 13:38:02 crc kubenswrapper[4786]: E0127 13:38:02.239852 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a25ca61f1a71b745764506d5e7439363114124b732141b028488fbb2ea11b7f1" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 13:38:02 crc kubenswrapper[4786]: E0127 13:38:02.239892 4786 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podUID="ade06765-40a5-4b74-a4fb-89726ae6c9d8" containerName="nova-kuttl-cell0-conductor-conductor" Jan 27 13:38:02 crc kubenswrapper[4786]: I0127 13:38:02.306318 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"914b9838-f13a-4f25-aaed-3cea9132ce49","Type":"ContainerStarted","Data":"d428eded70ad74b7bcf7fd1951f0b472ea643870ab4e0cc754b14502075094d1"} Jan 27 13:38:02 crc kubenswrapper[4786]: I0127 13:38:02.307524 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:38:02 crc kubenswrapper[4786]: I0127 13:38:02.309649 4786 generic.go:334] "Generic (PLEG): container finished" podID="c5565ba6-24db-449e-b480-939133f9848f" containerID="fe611adf5e7e8af4eeb6f04c0923f019f55e5710efb57e7d31e4ce1f3dc6b951" exitCode=0 Jan 27 13:38:02 crc kubenswrapper[4786]: I0127 13:38:02.309701 4786 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"c5565ba6-24db-449e-b480-939133f9848f","Type":"ContainerDied","Data":"fe611adf5e7e8af4eeb6f04c0923f019f55e5710efb57e7d31e4ce1f3dc6b951"} Jan 27 13:38:02 crc kubenswrapper[4786]: I0127 13:38:02.309722 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"c5565ba6-24db-449e-b480-939133f9848f","Type":"ContainerDied","Data":"1d91535d83c507a1685553695491a654aeb60b22b05b45486522495eed2ef658"} Jan 27 13:38:02 crc kubenswrapper[4786]: I0127 13:38:02.309732 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d91535d83c507a1685553695491a654aeb60b22b05b45486522495eed2ef658" Jan 27 13:38:02 crc kubenswrapper[4786]: I0127 13:38:02.311945 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"718480bb-da0c-488e-a922-86681c5516a4","Type":"ContainerStarted","Data":"80563631b660a24bd6912ab75c50215c0cbac6aa4d2ba06f8efb4bcdb8957a55"} Jan 27 13:38:02 crc kubenswrapper[4786]: I0127 13:38:02.342158 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podStartSLOduration=2.342113698 podStartE2EDuration="2.342113698s" podCreationTimestamp="2026-01-27 13:38:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:38:02.325459682 +0000 UTC m=+1865.536073801" watchObservedRunningTime="2026-01-27 13:38:02.342113698 +0000 UTC m=+1865.552727817" Jan 27 13:38:02 crc kubenswrapper[4786]: I0127 13:38:02.347457 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.347442124 podStartE2EDuration="2.347442124s" podCreationTimestamp="2026-01-27 13:38:00 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:38:02.341138121 +0000 UTC m=+1865.551752270" watchObservedRunningTime="2026-01-27 13:38:02.347442124 +0000 UTC m=+1865.558056243" Jan 27 13:38:02 crc kubenswrapper[4786]: I0127 13:38:02.363563 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:38:02 crc kubenswrapper[4786]: I0127 13:38:02.465228 4786 scope.go:117] "RemoveContainer" containerID="2f94fef961bba31453718ee505c65aba4d5e2d46c581dccdf2daaebbe3a39906" Jan 27 13:38:02 crc kubenswrapper[4786]: E0127 13:38:02.465429 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:38:02 crc kubenswrapper[4786]: I0127 13:38:02.539353 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5565ba6-24db-449e-b480-939133f9848f-config-data\") pod \"c5565ba6-24db-449e-b480-939133f9848f\" (UID: \"c5565ba6-24db-449e-b480-939133f9848f\") " Jan 27 13:38:02 crc kubenswrapper[4786]: I0127 13:38:02.539442 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5kfq\" (UniqueName: \"kubernetes.io/projected/c5565ba6-24db-449e-b480-939133f9848f-kube-api-access-x5kfq\") pod \"c5565ba6-24db-449e-b480-939133f9848f\" (UID: \"c5565ba6-24db-449e-b480-939133f9848f\") " Jan 27 13:38:02 crc kubenswrapper[4786]: I0127 13:38:02.539497 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/c5565ba6-24db-449e-b480-939133f9848f-logs\") pod \"c5565ba6-24db-449e-b480-939133f9848f\" (UID: \"c5565ba6-24db-449e-b480-939133f9848f\") " Jan 27 13:38:02 crc kubenswrapper[4786]: I0127 13:38:02.540126 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5565ba6-24db-449e-b480-939133f9848f-logs" (OuterVolumeSpecName: "logs") pod "c5565ba6-24db-449e-b480-939133f9848f" (UID: "c5565ba6-24db-449e-b480-939133f9848f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:38:02 crc kubenswrapper[4786]: I0127 13:38:02.553296 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5565ba6-24db-449e-b480-939133f9848f-kube-api-access-x5kfq" (OuterVolumeSpecName: "kube-api-access-x5kfq") pod "c5565ba6-24db-449e-b480-939133f9848f" (UID: "c5565ba6-24db-449e-b480-939133f9848f"). InnerVolumeSpecName "kube-api-access-x5kfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:38:02 crc kubenswrapper[4786]: I0127 13:38:02.571917 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5565ba6-24db-449e-b480-939133f9848f-config-data" (OuterVolumeSpecName: "config-data") pod "c5565ba6-24db-449e-b480-939133f9848f" (UID: "c5565ba6-24db-449e-b480-939133f9848f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:38:02 crc kubenswrapper[4786]: I0127 13:38:02.641882 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5565ba6-24db-449e-b480-939133f9848f-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:38:02 crc kubenswrapper[4786]: I0127 13:38:02.641918 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5kfq\" (UniqueName: \"kubernetes.io/projected/c5565ba6-24db-449e-b480-939133f9848f-kube-api-access-x5kfq\") on node \"crc\" DevicePath \"\"" Jan 27 13:38:02 crc kubenswrapper[4786]: I0127 13:38:02.641931 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5565ba6-24db-449e-b480-939133f9848f-logs\") on node \"crc\" DevicePath \"\"" Jan 27 13:38:03 crc kubenswrapper[4786]: I0127 13:38:03.318779 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:38:03 crc kubenswrapper[4786]: I0127 13:38:03.350338 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:38:03 crc kubenswrapper[4786]: I0127 13:38:03.367695 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:38:03 crc kubenswrapper[4786]: I0127 13:38:03.378948 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:38:03 crc kubenswrapper[4786]: E0127 13:38:03.379330 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5565ba6-24db-449e-b480-939133f9848f" containerName="nova-kuttl-api-log" Jan 27 13:38:03 crc kubenswrapper[4786]: I0127 13:38:03.379352 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5565ba6-24db-449e-b480-939133f9848f" containerName="nova-kuttl-api-log" Jan 27 13:38:03 crc kubenswrapper[4786]: E0127 13:38:03.379392 4786 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="c5565ba6-24db-449e-b480-939133f9848f" containerName="nova-kuttl-api-api" Jan 27 13:38:03 crc kubenswrapper[4786]: I0127 13:38:03.379401 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5565ba6-24db-449e-b480-939133f9848f" containerName="nova-kuttl-api-api" Jan 27 13:38:03 crc kubenswrapper[4786]: I0127 13:38:03.379577 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5565ba6-24db-449e-b480-939133f9848f" containerName="nova-kuttl-api-api" Jan 27 13:38:03 crc kubenswrapper[4786]: I0127 13:38:03.379624 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5565ba6-24db-449e-b480-939133f9848f" containerName="nova-kuttl-api-log" Jan 27 13:38:03 crc kubenswrapper[4786]: I0127 13:38:03.380637 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:38:03 crc kubenswrapper[4786]: I0127 13:38:03.383636 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 27 13:38:03 crc kubenswrapper[4786]: I0127 13:38:03.388240 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:38:03 crc kubenswrapper[4786]: I0127 13:38:03.475185 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5565ba6-24db-449e-b480-939133f9848f" path="/var/lib/kubelet/pods/c5565ba6-24db-449e-b480-939133f9848f/volumes" Jan 27 13:38:03 crc kubenswrapper[4786]: I0127 13:38:03.556625 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lrxc\" (UniqueName: \"kubernetes.io/projected/b9fb56bf-45dd-4959-960a-be52c9c66f4c-kube-api-access-6lrxc\") pod \"nova-kuttl-api-0\" (UID: \"b9fb56bf-45dd-4959-960a-be52c9c66f4c\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:38:03 crc kubenswrapper[4786]: I0127 13:38:03.556744 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9fb56bf-45dd-4959-960a-be52c9c66f4c-config-data\") pod \"nova-kuttl-api-0\" (UID: \"b9fb56bf-45dd-4959-960a-be52c9c66f4c\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:38:03 crc kubenswrapper[4786]: I0127 13:38:03.556893 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9fb56bf-45dd-4959-960a-be52c9c66f4c-logs\") pod \"nova-kuttl-api-0\" (UID: \"b9fb56bf-45dd-4959-960a-be52c9c66f4c\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:38:03 crc kubenswrapper[4786]: I0127 13:38:03.658585 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9fb56bf-45dd-4959-960a-be52c9c66f4c-config-data\") pod \"nova-kuttl-api-0\" (UID: \"b9fb56bf-45dd-4959-960a-be52c9c66f4c\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:38:03 crc kubenswrapper[4786]: I0127 13:38:03.658684 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9fb56bf-45dd-4959-960a-be52c9c66f4c-logs\") pod \"nova-kuttl-api-0\" (UID: \"b9fb56bf-45dd-4959-960a-be52c9c66f4c\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:38:03 crc kubenswrapper[4786]: I0127 13:38:03.658783 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lrxc\" (UniqueName: \"kubernetes.io/projected/b9fb56bf-45dd-4959-960a-be52c9c66f4c-kube-api-access-6lrxc\") pod \"nova-kuttl-api-0\" (UID: \"b9fb56bf-45dd-4959-960a-be52c9c66f4c\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:38:03 crc kubenswrapper[4786]: I0127 13:38:03.659286 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9fb56bf-45dd-4959-960a-be52c9c66f4c-logs\") 
pod \"nova-kuttl-api-0\" (UID: \"b9fb56bf-45dd-4959-960a-be52c9c66f4c\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:38:03 crc kubenswrapper[4786]: I0127 13:38:03.672303 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9fb56bf-45dd-4959-960a-be52c9c66f4c-config-data\") pod \"nova-kuttl-api-0\" (UID: \"b9fb56bf-45dd-4959-960a-be52c9c66f4c\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:38:03 crc kubenswrapper[4786]: I0127 13:38:03.687109 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lrxc\" (UniqueName: \"kubernetes.io/projected/b9fb56bf-45dd-4959-960a-be52c9c66f4c-kube-api-access-6lrxc\") pod \"nova-kuttl-api-0\" (UID: \"b9fb56bf-45dd-4959-960a-be52c9c66f4c\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:38:03 crc kubenswrapper[4786]: I0127 13:38:03.703710 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:38:04 crc kubenswrapper[4786]: I0127 13:38:04.120326 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:38:04 crc kubenswrapper[4786]: I0127 13:38:04.327747 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"b9fb56bf-45dd-4959-960a-be52c9c66f4c","Type":"ContainerStarted","Data":"7837ba8c5380789808c429ecb5888f62e511d2059b5b15424166ed15c3b873bf"} Jan 27 13:38:05 crc kubenswrapper[4786]: I0127 13:38:05.337774 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"b9fb56bf-45dd-4959-960a-be52c9c66f4c","Type":"ContainerStarted","Data":"479011719da3d55374c9abd8377015b6cb000da09db2cbe52a7feb1e1b76d20b"} Jan 27 13:38:05 crc kubenswrapper[4786]: I0127 13:38:05.338242 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" 
event={"ID":"b9fb56bf-45dd-4959-960a-be52c9c66f4c","Type":"ContainerStarted","Data":"32bad080f544afdea35cbf47134fcdba54fcb3f4b5b3559a01b126d4e075ffd3"} Jan 27 13:38:05 crc kubenswrapper[4786]: E0127 13:38:05.363547 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e92b6484e78557635a0a5c94a05ddbff2f4cfa37e72743a5308f37ddb9231330" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 27 13:38:05 crc kubenswrapper[4786]: E0127 13:38:05.365999 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e92b6484e78557635a0a5c94a05ddbff2f4cfa37e72743a5308f37ddb9231330" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 27 13:38:05 crc kubenswrapper[4786]: I0127 13:38:05.366510 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.36648457 podStartE2EDuration="2.36648457s" podCreationTimestamp="2026-01-27 13:38:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:38:05.360015854 +0000 UTC m=+1868.570629973" watchObservedRunningTime="2026-01-27 13:38:05.36648457 +0000 UTC m=+1868.577098689" Jan 27 13:38:05 crc kubenswrapper[4786]: E0127 13:38:05.367377 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e92b6484e78557635a0a5c94a05ddbff2f4cfa37e72743a5308f37ddb9231330" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 27 13:38:05 crc kubenswrapper[4786]: E0127 13:38:05.367427 4786 prober.go:104] "Probe errored" err="rpc 
error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="b59b5bdb-15fd-4a34-bdcc-1b64a3795f10" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 27 13:38:05 crc kubenswrapper[4786]: I0127 13:38:05.691254 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:38:07 crc kubenswrapper[4786]: I0127 13:38:07.024877 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:38:07 crc kubenswrapper[4786]: I0127 13:38:07.208864 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade06765-40a5-4b74-a4fb-89726ae6c9d8-config-data\") pod \"ade06765-40a5-4b74-a4fb-89726ae6c9d8\" (UID: \"ade06765-40a5-4b74-a4fb-89726ae6c9d8\") " Jan 27 13:38:07 crc kubenswrapper[4786]: I0127 13:38:07.208951 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kg69h\" (UniqueName: \"kubernetes.io/projected/ade06765-40a5-4b74-a4fb-89726ae6c9d8-kube-api-access-kg69h\") pod \"ade06765-40a5-4b74-a4fb-89726ae6c9d8\" (UID: \"ade06765-40a5-4b74-a4fb-89726ae6c9d8\") " Jan 27 13:38:07 crc kubenswrapper[4786]: I0127 13:38:07.214133 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ade06765-40a5-4b74-a4fb-89726ae6c9d8-kube-api-access-kg69h" (OuterVolumeSpecName: "kube-api-access-kg69h") pod "ade06765-40a5-4b74-a4fb-89726ae6c9d8" (UID: "ade06765-40a5-4b74-a4fb-89726ae6c9d8"). InnerVolumeSpecName "kube-api-access-kg69h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:38:07 crc kubenswrapper[4786]: I0127 13:38:07.232828 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ade06765-40a5-4b74-a4fb-89726ae6c9d8-config-data" (OuterVolumeSpecName: "config-data") pod "ade06765-40a5-4b74-a4fb-89726ae6c9d8" (UID: "ade06765-40a5-4b74-a4fb-89726ae6c9d8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:38:07 crc kubenswrapper[4786]: I0127 13:38:07.310579 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade06765-40a5-4b74-a4fb-89726ae6c9d8-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:38:07 crc kubenswrapper[4786]: I0127 13:38:07.310621 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kg69h\" (UniqueName: \"kubernetes.io/projected/ade06765-40a5-4b74-a4fb-89726ae6c9d8-kube-api-access-kg69h\") on node \"crc\" DevicePath \"\"" Jan 27 13:38:07 crc kubenswrapper[4786]: I0127 13:38:07.353774 4786 generic.go:334] "Generic (PLEG): container finished" podID="ade06765-40a5-4b74-a4fb-89726ae6c9d8" containerID="a25ca61f1a71b745764506d5e7439363114124b732141b028488fbb2ea11b7f1" exitCode=0 Jan 27 13:38:07 crc kubenswrapper[4786]: I0127 13:38:07.353821 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:38:07 crc kubenswrapper[4786]: I0127 13:38:07.353836 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"ade06765-40a5-4b74-a4fb-89726ae6c9d8","Type":"ContainerDied","Data":"a25ca61f1a71b745764506d5e7439363114124b732141b028488fbb2ea11b7f1"} Jan 27 13:38:07 crc kubenswrapper[4786]: I0127 13:38:07.354236 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"ade06765-40a5-4b74-a4fb-89726ae6c9d8","Type":"ContainerDied","Data":"e84bb6ad2c5c015b349e6acfb82446c1b73d483ec44e4f5f3060815d06c1874e"} Jan 27 13:38:07 crc kubenswrapper[4786]: I0127 13:38:07.354257 4786 scope.go:117] "RemoveContainer" containerID="a25ca61f1a71b745764506d5e7439363114124b732141b028488fbb2ea11b7f1" Jan 27 13:38:07 crc kubenswrapper[4786]: I0127 13:38:07.381564 4786 scope.go:117] "RemoveContainer" containerID="a25ca61f1a71b745764506d5e7439363114124b732141b028488fbb2ea11b7f1" Jan 27 13:38:07 crc kubenswrapper[4786]: E0127 13:38:07.381936 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a25ca61f1a71b745764506d5e7439363114124b732141b028488fbb2ea11b7f1\": container with ID starting with a25ca61f1a71b745764506d5e7439363114124b732141b028488fbb2ea11b7f1 not found: ID does not exist" containerID="a25ca61f1a71b745764506d5e7439363114124b732141b028488fbb2ea11b7f1" Jan 27 13:38:07 crc kubenswrapper[4786]: I0127 13:38:07.381963 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a25ca61f1a71b745764506d5e7439363114124b732141b028488fbb2ea11b7f1"} err="failed to get container status \"a25ca61f1a71b745764506d5e7439363114124b732141b028488fbb2ea11b7f1\": rpc error: code = NotFound desc = could not find container 
\"a25ca61f1a71b745764506d5e7439363114124b732141b028488fbb2ea11b7f1\": container with ID starting with a25ca61f1a71b745764506d5e7439363114124b732141b028488fbb2ea11b7f1 not found: ID does not exist" Jan 27 13:38:07 crc kubenswrapper[4786]: I0127 13:38:07.383104 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 27 13:38:07 crc kubenswrapper[4786]: I0127 13:38:07.390285 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 27 13:38:07 crc kubenswrapper[4786]: I0127 13:38:07.403402 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 27 13:38:07 crc kubenswrapper[4786]: E0127 13:38:07.404411 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade06765-40a5-4b74-a4fb-89726ae6c9d8" containerName="nova-kuttl-cell0-conductor-conductor" Jan 27 13:38:07 crc kubenswrapper[4786]: I0127 13:38:07.404518 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade06765-40a5-4b74-a4fb-89726ae6c9d8" containerName="nova-kuttl-cell0-conductor-conductor" Jan 27 13:38:07 crc kubenswrapper[4786]: I0127 13:38:07.404814 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="ade06765-40a5-4b74-a4fb-89726ae6c9d8" containerName="nova-kuttl-cell0-conductor-conductor" Jan 27 13:38:07 crc kubenswrapper[4786]: I0127 13:38:07.405555 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:38:07 crc kubenswrapper[4786]: I0127 13:38:07.407512 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 27 13:38:07 crc kubenswrapper[4786]: I0127 13:38:07.416299 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 27 13:38:07 crc kubenswrapper[4786]: I0127 13:38:07.486067 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ade06765-40a5-4b74-a4fb-89726ae6c9d8" path="/var/lib/kubelet/pods/ade06765-40a5-4b74-a4fb-89726ae6c9d8/volumes" Jan 27 13:38:07 crc kubenswrapper[4786]: I0127 13:38:07.512659 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2ch6\" (UniqueName: \"kubernetes.io/projected/d49776e0-cc83-407e-b380-ef9f91a06224-kube-api-access-r2ch6\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"d49776e0-cc83-407e-b380-ef9f91a06224\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:38:07 crc kubenswrapper[4786]: I0127 13:38:07.512735 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d49776e0-cc83-407e-b380-ef9f91a06224-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"d49776e0-cc83-407e-b380-ef9f91a06224\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:38:07 crc kubenswrapper[4786]: I0127 13:38:07.614091 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2ch6\" (UniqueName: \"kubernetes.io/projected/d49776e0-cc83-407e-b380-ef9f91a06224-kube-api-access-r2ch6\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"d49776e0-cc83-407e-b380-ef9f91a06224\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:38:07 crc kubenswrapper[4786]: I0127 
13:38:07.614182 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d49776e0-cc83-407e-b380-ef9f91a06224-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"d49776e0-cc83-407e-b380-ef9f91a06224\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:38:07 crc kubenswrapper[4786]: I0127 13:38:07.618968 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d49776e0-cc83-407e-b380-ef9f91a06224-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"d49776e0-cc83-407e-b380-ef9f91a06224\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:38:07 crc kubenswrapper[4786]: I0127 13:38:07.629969 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2ch6\" (UniqueName: \"kubernetes.io/projected/d49776e0-cc83-407e-b380-ef9f91a06224-kube-api-access-r2ch6\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"d49776e0-cc83-407e-b380-ef9f91a06224\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:38:07 crc kubenswrapper[4786]: I0127 13:38:07.728709 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:38:08 crc kubenswrapper[4786]: I0127 13:38:08.136110 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 27 13:38:08 crc kubenswrapper[4786]: I0127 13:38:08.362587 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"d49776e0-cc83-407e-b380-ef9f91a06224","Type":"ContainerStarted","Data":"04675d488705f8abfeb582e59a3ebbcf8327e162de471abb8235b590413c1583"} Jan 27 13:38:08 crc kubenswrapper[4786]: I0127 13:38:08.362645 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"d49776e0-cc83-407e-b380-ef9f91a06224","Type":"ContainerStarted","Data":"0488e8b0c5db53c95a589625c4718a3dd125767d637a8ae99114fabd72384d98"} Jan 27 13:38:08 crc kubenswrapper[4786]: I0127 13:38:08.362766 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:38:08 crc kubenswrapper[4786]: I0127 13:38:08.377963 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podStartSLOduration=1.37794122 podStartE2EDuration="1.37794122s" podCreationTimestamp="2026-01-27 13:38:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:38:08.377733334 +0000 UTC m=+1871.588347453" watchObservedRunningTime="2026-01-27 13:38:08.37794122 +0000 UTC m=+1871.588555339" Jan 27 13:38:09 crc kubenswrapper[4786]: I0127 13:38:09.257873 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 27 13:38:09 crc kubenswrapper[4786]: I0127 13:38:09.370385 4786 generic.go:334] "Generic (PLEG): container finished" podID="b59b5bdb-15fd-4a34-bdcc-1b64a3795f10" containerID="e92b6484e78557635a0a5c94a05ddbff2f4cfa37e72743a5308f37ddb9231330" exitCode=0 Jan 27 13:38:09 crc kubenswrapper[4786]: I0127 13:38:09.370418 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 27 13:38:09 crc kubenswrapper[4786]: I0127 13:38:09.370463 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"b59b5bdb-15fd-4a34-bdcc-1b64a3795f10","Type":"ContainerDied","Data":"e92b6484e78557635a0a5c94a05ddbff2f4cfa37e72743a5308f37ddb9231330"} Jan 27 13:38:09 crc kubenswrapper[4786]: I0127 13:38:09.370487 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"b59b5bdb-15fd-4a34-bdcc-1b64a3795f10","Type":"ContainerDied","Data":"2a04f7a22406ab1b29bd035bb62180e9d8da88d6e277ff1d653bac3c06e2e213"} Jan 27 13:38:09 crc kubenswrapper[4786]: I0127 13:38:09.370506 4786 scope.go:117] "RemoveContainer" containerID="e92b6484e78557635a0a5c94a05ddbff2f4cfa37e72743a5308f37ddb9231330" Jan 27 13:38:09 crc kubenswrapper[4786]: I0127 13:38:09.390710 4786 scope.go:117] "RemoveContainer" containerID="e92b6484e78557635a0a5c94a05ddbff2f4cfa37e72743a5308f37ddb9231330" Jan 27 13:38:09 crc kubenswrapper[4786]: E0127 13:38:09.391207 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e92b6484e78557635a0a5c94a05ddbff2f4cfa37e72743a5308f37ddb9231330\": container with ID starting with e92b6484e78557635a0a5c94a05ddbff2f4cfa37e72743a5308f37ddb9231330 not found: ID does not exist" 
containerID="e92b6484e78557635a0a5c94a05ddbff2f4cfa37e72743a5308f37ddb9231330" Jan 27 13:38:09 crc kubenswrapper[4786]: I0127 13:38:09.391240 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e92b6484e78557635a0a5c94a05ddbff2f4cfa37e72743a5308f37ddb9231330"} err="failed to get container status \"e92b6484e78557635a0a5c94a05ddbff2f4cfa37e72743a5308f37ddb9231330\": rpc error: code = NotFound desc = could not find container \"e92b6484e78557635a0a5c94a05ddbff2f4cfa37e72743a5308f37ddb9231330\": container with ID starting with e92b6484e78557635a0a5c94a05ddbff2f4cfa37e72743a5308f37ddb9231330 not found: ID does not exist" Jan 27 13:38:09 crc kubenswrapper[4786]: I0127 13:38:09.446654 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b59b5bdb-15fd-4a34-bdcc-1b64a3795f10-config-data\") pod \"b59b5bdb-15fd-4a34-bdcc-1b64a3795f10\" (UID: \"b59b5bdb-15fd-4a34-bdcc-1b64a3795f10\") " Jan 27 13:38:09 crc kubenswrapper[4786]: I0127 13:38:09.446724 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbmjr\" (UniqueName: \"kubernetes.io/projected/b59b5bdb-15fd-4a34-bdcc-1b64a3795f10-kube-api-access-xbmjr\") pod \"b59b5bdb-15fd-4a34-bdcc-1b64a3795f10\" (UID: \"b59b5bdb-15fd-4a34-bdcc-1b64a3795f10\") " Jan 27 13:38:09 crc kubenswrapper[4786]: I0127 13:38:09.456265 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b59b5bdb-15fd-4a34-bdcc-1b64a3795f10-kube-api-access-xbmjr" (OuterVolumeSpecName: "kube-api-access-xbmjr") pod "b59b5bdb-15fd-4a34-bdcc-1b64a3795f10" (UID: "b59b5bdb-15fd-4a34-bdcc-1b64a3795f10"). InnerVolumeSpecName "kube-api-access-xbmjr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:38:09 crc kubenswrapper[4786]: I0127 13:38:09.493570 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b59b5bdb-15fd-4a34-bdcc-1b64a3795f10-config-data" (OuterVolumeSpecName: "config-data") pod "b59b5bdb-15fd-4a34-bdcc-1b64a3795f10" (UID: "b59b5bdb-15fd-4a34-bdcc-1b64a3795f10"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:38:09 crc kubenswrapper[4786]: I0127 13:38:09.548892 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b59b5bdb-15fd-4a34-bdcc-1b64a3795f10-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:38:09 crc kubenswrapper[4786]: I0127 13:38:09.548926 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbmjr\" (UniqueName: \"kubernetes.io/projected/b59b5bdb-15fd-4a34-bdcc-1b64a3795f10-kube-api-access-xbmjr\") on node \"crc\" DevicePath \"\"" Jan 27 13:38:09 crc kubenswrapper[4786]: I0127 13:38:09.715533 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 27 13:38:09 crc kubenswrapper[4786]: I0127 13:38:09.730204 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 27 13:38:09 crc kubenswrapper[4786]: I0127 13:38:09.749968 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 27 13:38:09 crc kubenswrapper[4786]: E0127 13:38:09.750641 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b59b5bdb-15fd-4a34-bdcc-1b64a3795f10" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 27 13:38:09 crc kubenswrapper[4786]: I0127 13:38:09.750688 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="b59b5bdb-15fd-4a34-bdcc-1b64a3795f10" 
containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 27 13:38:09 crc kubenswrapper[4786]: I0127 13:38:09.751146 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="b59b5bdb-15fd-4a34-bdcc-1b64a3795f10" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 27 13:38:09 crc kubenswrapper[4786]: I0127 13:38:09.751982 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 27 13:38:09 crc kubenswrapper[4786]: I0127 13:38:09.752118 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 27 13:38:09 crc kubenswrapper[4786]: I0127 13:38:09.756418 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-compute-fake1-compute-config-data" Jan 27 13:38:09 crc kubenswrapper[4786]: I0127 13:38:09.854881 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq98v\" (UniqueName: \"kubernetes.io/projected/3cf31c53-34d5-4acc-b59e-096cfe798213-kube-api-access-kq98v\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"3cf31c53-34d5-4acc-b59e-096cfe798213\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 27 13:38:09 crc kubenswrapper[4786]: I0127 13:38:09.855012 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cf31c53-34d5-4acc-b59e-096cfe798213-config-data\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"3cf31c53-34d5-4acc-b59e-096cfe798213\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 27 13:38:09 crc kubenswrapper[4786]: I0127 13:38:09.956928 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3cf31c53-34d5-4acc-b59e-096cfe798213-config-data\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"3cf31c53-34d5-4acc-b59e-096cfe798213\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 27 13:38:09 crc kubenswrapper[4786]: I0127 13:38:09.957024 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq98v\" (UniqueName: \"kubernetes.io/projected/3cf31c53-34d5-4acc-b59e-096cfe798213-kube-api-access-kq98v\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"3cf31c53-34d5-4acc-b59e-096cfe798213\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 27 13:38:09 crc kubenswrapper[4786]: I0127 13:38:09.960727 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cf31c53-34d5-4acc-b59e-096cfe798213-config-data\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"3cf31c53-34d5-4acc-b59e-096cfe798213\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 27 13:38:09 crc kubenswrapper[4786]: I0127 13:38:09.976451 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq98v\" (UniqueName: \"kubernetes.io/projected/3cf31c53-34d5-4acc-b59e-096cfe798213-kube-api-access-kq98v\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"3cf31c53-34d5-4acc-b59e-096cfe798213\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 27 13:38:10 crc kubenswrapper[4786]: I0127 13:38:10.096399 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"
Jan 27 13:38:10 crc kubenswrapper[4786]: I0127 13:38:10.492856 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"]
Jan 27 13:38:10 crc kubenswrapper[4786]: W0127 13:38:10.497157 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3cf31c53_34d5_4acc_b59e_096cfe798213.slice/crio-1595b9288597317f6121426d08312149b66926944c088505fee808b664be3cd7 WatchSource:0}: Error finding container 1595b9288597317f6121426d08312149b66926944c088505fee808b664be3cd7: Status 404 returned error can't find the container with id 1595b9288597317f6121426d08312149b66926944c088505fee808b664be3cd7
Jan 27 13:38:10 crc kubenswrapper[4786]: I0127 13:38:10.691296 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 27 13:38:10 crc kubenswrapper[4786]: I0127 13:38:10.712585 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 27 13:38:10 crc kubenswrapper[4786]: I0127 13:38:10.714501 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 27 13:38:11 crc kubenswrapper[4786]: I0127 13:38:11.393370 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"3cf31c53-34d5-4acc-b59e-096cfe798213","Type":"ContainerStarted","Data":"aca47955fa0175ae9e4de8fb3b6e1e7d25150185bf68862db3843ed3dff294d1"}
Jan 27 13:38:11 crc kubenswrapper[4786]: I0127 13:38:11.393429 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"3cf31c53-34d5-4acc-b59e-096cfe798213","Type":"ContainerStarted","Data":"1595b9288597317f6121426d08312149b66926944c088505fee808b664be3cd7"}
Jan 27 13:38:11 crc kubenswrapper[4786]: I0127 13:38:11.393542 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"
Jan 27 13:38:11 crc kubenswrapper[4786]: I0127 13:38:11.419623 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 27 13:38:11 crc kubenswrapper[4786]: I0127 13:38:11.423313 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"
Jan 27 13:38:11 crc kubenswrapper[4786]: I0127 13:38:11.428777 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podStartSLOduration=2.428755775 podStartE2EDuration="2.428755775s" podCreationTimestamp="2026-01-27 13:38:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:38:11.421797505 +0000 UTC m=+1874.632411624" watchObservedRunningTime="2026-01-27 13:38:11.428755775 +0000 UTC m=+1874.639369894"
Jan 27 13:38:11 crc kubenswrapper[4786]: I0127 13:38:11.476724 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b59b5bdb-15fd-4a34-bdcc-1b64a3795f10" path="/var/lib/kubelet/pods/b59b5bdb-15fd-4a34-bdcc-1b64a3795f10/volumes"
Jan 27 13:38:13 crc kubenswrapper[4786]: I0127 13:38:13.706371 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 27 13:38:13 crc kubenswrapper[4786]: I0127 13:38:13.707714 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 27 13:38:14 crc kubenswrapper[4786]: I0127 13:38:14.419423 4786 generic.go:334] "Generic (PLEG): container finished" podID="3cf31c53-34d5-4acc-b59e-096cfe798213" containerID="aca47955fa0175ae9e4de8fb3b6e1e7d25150185bf68862db3843ed3dff294d1" exitCode=0
Jan 27 13:38:14 crc kubenswrapper[4786]: I0127 13:38:14.419816 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"3cf31c53-34d5-4acc-b59e-096cfe798213","Type":"ContainerDied","Data":"aca47955fa0175ae9e4de8fb3b6e1e7d25150185bf68862db3843ed3dff294d1"}
Jan 27 13:38:14 crc kubenswrapper[4786]: I0127 13:38:14.420441 4786 scope.go:117] "RemoveContainer" containerID="aca47955fa0175ae9e4de8fb3b6e1e7d25150185bf68862db3843ed3dff294d1"
Jan 27 13:38:14 crc kubenswrapper[4786]: I0127 13:38:14.786786 4786 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="b9fb56bf-45dd-4959-960a-be52c9c66f4c" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.205:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 13:38:14 crc kubenswrapper[4786]: I0127 13:38:14.786809 4786 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="b9fb56bf-45dd-4959-960a-be52c9c66f4c" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.205:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 13:38:15 crc kubenswrapper[4786]: I0127 13:38:15.097260 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"
Jan 27 13:38:15 crc kubenswrapper[4786]: I0127 13:38:15.429388 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"3cf31c53-34d5-4acc-b59e-096cfe798213","Type":"ContainerStarted","Data":"f4981bf8a8c2a954fa9a41a005d9f10fea717f207ce095af9a61a8b49b2cf2b5"}
Jan 27 13:38:15 crc kubenswrapper[4786]: I0127 13:38:15.430383 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"
Jan 27 13:38:15 crc kubenswrapper[4786]: I0127 13:38:15.461824 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"
Jan 27 13:38:15 crc kubenswrapper[4786]: I0127 13:38:15.464656 4786 scope.go:117] "RemoveContainer" containerID="2f94fef961bba31453718ee505c65aba4d5e2d46c581dccdf2daaebbe3a39906"
Jan 27 13:38:15 crc kubenswrapper[4786]: E0127 13:38:15.464896 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc"
Jan 27 13:38:17 crc kubenswrapper[4786]: I0127 13:38:17.752724 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 27 13:38:18 crc kubenswrapper[4786]: I0127 13:38:18.455035 4786 generic.go:334] "Generic (PLEG): container finished" podID="3cf31c53-34d5-4acc-b59e-096cfe798213" containerID="f4981bf8a8c2a954fa9a41a005d9f10fea717f207ce095af9a61a8b49b2cf2b5" exitCode=0
Jan 27 13:38:18 crc kubenswrapper[4786]: I0127 13:38:18.455075 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"3cf31c53-34d5-4acc-b59e-096cfe798213","Type":"ContainerDied","Data":"f4981bf8a8c2a954fa9a41a005d9f10fea717f207ce095af9a61a8b49b2cf2b5"}
Jan 27 13:38:18 crc kubenswrapper[4786]: I0127 13:38:18.455119 4786 scope.go:117] "RemoveContainer" containerID="aca47955fa0175ae9e4de8fb3b6e1e7d25150185bf68862db3843ed3dff294d1"
Jan 27 13:38:18 crc kubenswrapper[4786]: I0127 13:38:18.455616 4786 scope.go:117] "RemoveContainer" containerID="f4981bf8a8c2a954fa9a41a005d9f10fea717f207ce095af9a61a8b49b2cf2b5"
Jan 27 13:38:18 crc kubenswrapper[4786]: E0127 13:38:18.455975 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-kuttl-cell1-compute-fake1-compute-compute\" with CrashLoopBackOff: \"back-off 10s restarting failed container=nova-kuttl-cell1-compute-fake1-compute-compute pod=nova-kuttl-cell1-compute-fake1-compute-0_nova-kuttl-default(3cf31c53-34d5-4acc-b59e-096cfe798213)\"" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="3cf31c53-34d5-4acc-b59e-096cfe798213"
Jan 27 13:38:20 crc kubenswrapper[4786]: I0127 13:38:20.097411 4786 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"
Jan 27 13:38:20 crc kubenswrapper[4786]: I0127 13:38:20.097993 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"
Jan 27 13:38:20 crc kubenswrapper[4786]: I0127 13:38:20.099197 4786 scope.go:117] "RemoveContainer" containerID="f4981bf8a8c2a954fa9a41a005d9f10fea717f207ce095af9a61a8b49b2cf2b5"
Jan 27 13:38:20 crc kubenswrapper[4786]: E0127 13:38:20.100045 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-kuttl-cell1-compute-fake1-compute-compute\" with CrashLoopBackOff: \"back-off 10s restarting failed container=nova-kuttl-cell1-compute-fake1-compute-compute pod=nova-kuttl-cell1-compute-fake1-compute-0_nova-kuttl-default(3cf31c53-34d5-4acc-b59e-096cfe798213)\"" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="3cf31c53-34d5-4acc-b59e-096cfe798213"
Jan 27 13:38:23 crc kubenswrapper[4786]: I0127 13:38:23.709839 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 27 13:38:23 crc kubenswrapper[4786]: I0127 13:38:23.711236 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 27 13:38:23 crc kubenswrapper[4786]: I0127 13:38:23.712389 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 27 13:38:23 crc kubenswrapper[4786]: I0127 13:38:23.714826 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 27 13:38:24 crc kubenswrapper[4786]: I0127 13:38:24.504900 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 27 13:38:24 crc kubenswrapper[4786]: I0127 13:38:24.514737 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 27 13:38:27 crc kubenswrapper[4786]: I0127 13:38:27.470929 4786 scope.go:117] "RemoveContainer" containerID="2f94fef961bba31453718ee505c65aba4d5e2d46c581dccdf2daaebbe3a39906"
Jan 27 13:38:27 crc kubenswrapper[4786]: E0127 13:38:27.471460 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc"
Jan 27 13:38:31 crc kubenswrapper[4786]: I0127 13:38:31.465408 4786 scope.go:117] "RemoveContainer" containerID="f4981bf8a8c2a954fa9a41a005d9f10fea717f207ce095af9a61a8b49b2cf2b5"
Jan 27 13:38:32 crc kubenswrapper[4786]: I0127 13:38:32.569470 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"3cf31c53-34d5-4acc-b59e-096cfe798213","Type":"ContainerStarted","Data":"7a8d504a24094c0a9dadddd97875ce7482cd0214e46e0fb4e03f736e3a9fea07"}
Jan 27 13:38:32 crc kubenswrapper[4786]: I0127 13:38:32.570321 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"
Jan 27 13:38:32 crc kubenswrapper[4786]: I0127 13:38:32.601460 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"
Jan 27 13:38:33 crc kubenswrapper[4786]: I0127 13:38:33.670876 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-d82l7"]
Jan 27 13:38:33 crc kubenswrapper[4786]: I0127 13:38:33.679698 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-d82l7"]
Jan 27 13:38:33 crc kubenswrapper[4786]: I0127 13:38:33.700917 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-5smpw"]
Jan 27 13:38:33 crc kubenswrapper[4786]: I0127 13:38:33.712676 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-5smpw"]
Jan 27 13:38:33 crc kubenswrapper[4786]: I0127 13:38:33.722132 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-host-discover-695l2"]
Jan 27 13:38:33 crc kubenswrapper[4786]: I0127 13:38:33.729753 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-host-discover-695l2"]
Jan 27 13:38:33 crc kubenswrapper[4786]: E0127 13:38:33.736278 4786 secret.go:188] Couldn't get secret nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-config-data: secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found
Jan 27 13:38:33 crc kubenswrapper[4786]: E0127 13:38:33.736367 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3cf31c53-34d5-4acc-b59e-096cfe798213-config-data podName:3cf31c53-34d5-4acc-b59e-096cfe798213 nodeName:}" failed. No retries permitted until 2026-01-27 13:38:34.236344789 +0000 UTC m=+1897.446958968 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/3cf31c53-34d5-4acc-b59e-096cfe798213-config-data") pod "nova-kuttl-cell1-compute-fake1-compute-0" (UID: "3cf31c53-34d5-4acc-b59e-096cfe798213") : secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found
Jan 27 13:38:33 crc kubenswrapper[4786]: I0127 13:38:33.737297 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"]
Jan 27 13:38:33 crc kubenswrapper[4786]: I0127 13:38:33.824405 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"]
Jan 27 13:38:33 crc kubenswrapper[4786]: I0127 13:38:33.824870 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podUID="f3baae91-6726-4018-ac0c-7036d4227441" containerName="nova-kuttl-cell1-novncproxy-novncproxy" containerID="cri-o://e2d577b04a27cc87c7e9af3607c2886f80bd62473062d2658c9495af0b9702a8" gracePeriod=30
Jan 27 13:38:33 crc kubenswrapper[4786]: I0127 13:38:33.831970 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novacell1935b-account-delete-xcg87"]
Jan 27 13:38:33 crc kubenswrapper[4786]: I0127 13:38:33.833628 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell1935b-account-delete-xcg87"
Jan 27 13:38:33 crc kubenswrapper[4786]: I0127 13:38:33.838815 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6bee6ae-ecee-409c-85eb-e1145c64c6fa-operator-scripts\") pod \"novacell1935b-account-delete-xcg87\" (UID: \"c6bee6ae-ecee-409c-85eb-e1145c64c6fa\") " pod="nova-kuttl-default/novacell1935b-account-delete-xcg87"
Jan 27 13:38:33 crc kubenswrapper[4786]: I0127 13:38:33.838854 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwbk7\" (UniqueName: \"kubernetes.io/projected/c6bee6ae-ecee-409c-85eb-e1145c64c6fa-kube-api-access-vwbk7\") pod \"novacell1935b-account-delete-xcg87\" (UID: \"c6bee6ae-ecee-409c-85eb-e1145c64c6fa\") " pod="nova-kuttl-default/novacell1935b-account-delete-xcg87"
Jan 27 13:38:33 crc kubenswrapper[4786]: I0127 13:38:33.843526 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell1935b-account-delete-xcg87"]
Jan 27 13:38:33 crc kubenswrapper[4786]: I0127 13:38:33.871562 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 27 13:38:33 crc kubenswrapper[4786]: I0127 13:38:33.875029 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="2942d84f-4aa3-4aa8-bec5-baf8bb43765c" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://0ba2be3e8bc554522c9b257099bb1983905bb3c9d18210949c86baf3c51be49b" gracePeriod=30
Jan 27 13:38:33 crc kubenswrapper[4786]: I0127 13:38:33.871869 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="2942d84f-4aa3-4aa8-bec5-baf8bb43765c" containerName="nova-kuttl-metadata-log" containerID="cri-o://d8d0ad3f35046e9b662e47aba8749d406208143400c1c05577f7eb946550ca3c" gracePeriod=30
Jan 27 13:38:33 crc kubenswrapper[4786]: I0127 13:38:33.927428 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 27 13:38:33 crc kubenswrapper[4786]: I0127 13:38:33.927752 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="b9fb56bf-45dd-4959-960a-be52c9c66f4c" containerName="nova-kuttl-api-log" containerID="cri-o://32bad080f544afdea35cbf47134fcdba54fcb3f4b5b3559a01b126d4e075ffd3" gracePeriod=30
Jan 27 13:38:33 crc kubenswrapper[4786]: I0127 13:38:33.927925 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="b9fb56bf-45dd-4959-960a-be52c9c66f4c" containerName="nova-kuttl-api-api" containerID="cri-o://479011719da3d55374c9abd8377015b6cb000da09db2cbe52a7feb1e1b76d20b" gracePeriod=30
Jan 27 13:38:33 crc kubenswrapper[4786]: I0127 13:38:33.940017 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6bee6ae-ecee-409c-85eb-e1145c64c6fa-operator-scripts\") pod \"novacell1935b-account-delete-xcg87\" (UID: \"c6bee6ae-ecee-409c-85eb-e1145c64c6fa\") " pod="nova-kuttl-default/novacell1935b-account-delete-xcg87"
Jan 27 13:38:33 crc kubenswrapper[4786]: I0127 13:38:33.940060 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwbk7\" (UniqueName: \"kubernetes.io/projected/c6bee6ae-ecee-409c-85eb-e1145c64c6fa-kube-api-access-vwbk7\") pod \"novacell1935b-account-delete-xcg87\" (UID: \"c6bee6ae-ecee-409c-85eb-e1145c64c6fa\") " pod="nova-kuttl-default/novacell1935b-account-delete-xcg87"
Jan 27 13:38:33 crc kubenswrapper[4786]: I0127 13:38:33.940949 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6bee6ae-ecee-409c-85eb-e1145c64c6fa-operator-scripts\") pod \"novacell1935b-account-delete-xcg87\" (UID: \"c6bee6ae-ecee-409c-85eb-e1145c64c6fa\") " pod="nova-kuttl-default/novacell1935b-account-delete-xcg87"
Jan 27 13:38:33 crc kubenswrapper[4786]: I0127 13:38:33.949669 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novaapib196-account-delete-4bvwx"]
Jan 27 13:38:33 crc kubenswrapper[4786]: I0127 13:38:33.950794 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapib196-account-delete-4bvwx"
Jan 27 13:38:33 crc kubenswrapper[4786]: I0127 13:38:33.967391 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novaapib196-account-delete-4bvwx"]
Jan 27 13:38:33 crc kubenswrapper[4786]: I0127 13:38:33.993650 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwbk7\" (UniqueName: \"kubernetes.io/projected/c6bee6ae-ecee-409c-85eb-e1145c64c6fa-kube-api-access-vwbk7\") pod \"novacell1935b-account-delete-xcg87\" (UID: \"c6bee6ae-ecee-409c-85eb-e1145c64c6fa\") " pod="nova-kuttl-default/novacell1935b-account-delete-xcg87"
Jan 27 13:38:34 crc kubenswrapper[4786]: I0127 13:38:33.999974 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novacell04ccc-account-delete-btv8j"]
Jan 27 13:38:34 crc kubenswrapper[4786]: I0127 13:38:34.001206 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell04ccc-account-delete-btv8j"
Jan 27 13:38:34 crc kubenswrapper[4786]: I0127 13:38:34.017142 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell04ccc-account-delete-btv8j"]
Jan 27 13:38:34 crc kubenswrapper[4786]: I0127 13:38:34.053399 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-k6rb7"]
Jan 27 13:38:34 crc kubenswrapper[4786]: I0127 13:38:34.073450 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-k6rb7"]
Jan 27 13:38:34 crc kubenswrapper[4786]: I0127 13:38:34.085342 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"]
Jan 27 13:38:34 crc kubenswrapper[4786]: I0127 13:38:34.085547 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podUID="d49776e0-cc83-407e-b380-ef9f91a06224" containerName="nova-kuttl-cell0-conductor-conductor" containerID="cri-o://04675d488705f8abfeb582e59a3ebbcf8327e162de471abb8235b590413c1583" gracePeriod=30
Jan 27 13:38:34 crc kubenswrapper[4786]: I0127 13:38:34.111045 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"]
Jan 27 13:38:34 crc kubenswrapper[4786]: I0127 13:38:34.111238 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podUID="914b9838-f13a-4f25-aaed-3cea9132ce49" containerName="nova-kuttl-cell1-conductor-conductor" containerID="cri-o://d428eded70ad74b7bcf7fd1951f0b472ea643870ab4e0cc754b14502075094d1" gracePeriod=30
Jan 27 13:38:34 crc kubenswrapper[4786]: I0127 13:38:34.122713 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-8cdhq"]
Jan 27 13:38:34 crc kubenswrapper[4786]: I0127 13:38:34.134435 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-8cdhq"]
Jan 27 13:38:34 crc kubenswrapper[4786]: I0127 13:38:34.144629 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 27 13:38:34 crc kubenswrapper[4786]: I0127 13:38:34.145097 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="718480bb-da0c-488e-a922-86681c5516a4" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://80563631b660a24bd6912ab75c50215c0cbac6aa4d2ba06f8efb4bcdb8957a55" gracePeriod=30
Jan 27 13:38:34 crc kubenswrapper[4786]: I0127 13:38:34.150674 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/addbbf63-4c8f-4cba-b4c6-7700a33cf878-operator-scripts\") pod \"novacell04ccc-account-delete-btv8j\" (UID: \"addbbf63-4c8f-4cba-b4c6-7700a33cf878\") " pod="nova-kuttl-default/novacell04ccc-account-delete-btv8j"
Jan 27 13:38:34 crc kubenswrapper[4786]: I0127 13:38:34.150762 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmqhn\" (UniqueName: \"kubernetes.io/projected/addbbf63-4c8f-4cba-b4c6-7700a33cf878-kube-api-access-gmqhn\") pod \"novacell04ccc-account-delete-btv8j\" (UID: \"addbbf63-4c8f-4cba-b4c6-7700a33cf878\") " pod="nova-kuttl-default/novacell04ccc-account-delete-btv8j"
Jan 27 13:38:34 crc kubenswrapper[4786]: I0127 13:38:34.150810 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxgn6\" (UniqueName: \"kubernetes.io/projected/ef3b9fa2-2177-44b8-966a-e164c7751b1b-kube-api-access-cxgn6\") pod \"novaapib196-account-delete-4bvwx\" (UID: \"ef3b9fa2-2177-44b8-966a-e164c7751b1b\") " pod="nova-kuttl-default/novaapib196-account-delete-4bvwx"
Jan 27 13:38:34 crc kubenswrapper[4786]: I0127 13:38:34.150860 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef3b9fa2-2177-44b8-966a-e164c7751b1b-operator-scripts\") pod \"novaapib196-account-delete-4bvwx\" (UID: \"ef3b9fa2-2177-44b8-966a-e164c7751b1b\") " pod="nova-kuttl-default/novaapib196-account-delete-4bvwx"
Jan 27 13:38:34 crc kubenswrapper[4786]: I0127 13:38:34.152309 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell1935b-account-delete-xcg87"
Jan 27 13:38:34 crc kubenswrapper[4786]: I0127 13:38:34.252187 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxgn6\" (UniqueName: \"kubernetes.io/projected/ef3b9fa2-2177-44b8-966a-e164c7751b1b-kube-api-access-cxgn6\") pod \"novaapib196-account-delete-4bvwx\" (UID: \"ef3b9fa2-2177-44b8-966a-e164c7751b1b\") " pod="nova-kuttl-default/novaapib196-account-delete-4bvwx"
Jan 27 13:38:34 crc kubenswrapper[4786]: I0127 13:38:34.252535 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef3b9fa2-2177-44b8-966a-e164c7751b1b-operator-scripts\") pod \"novaapib196-account-delete-4bvwx\" (UID: \"ef3b9fa2-2177-44b8-966a-e164c7751b1b\") " pod="nova-kuttl-default/novaapib196-account-delete-4bvwx"
Jan 27 13:38:34 crc kubenswrapper[4786]: I0127 13:38:34.252676 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/addbbf63-4c8f-4cba-b4c6-7700a33cf878-operator-scripts\") pod \"novacell04ccc-account-delete-btv8j\" (UID: \"addbbf63-4c8f-4cba-b4c6-7700a33cf878\") " pod="nova-kuttl-default/novacell04ccc-account-delete-btv8j"
Jan 27 13:38:34 crc kubenswrapper[4786]: I0127 13:38:34.252742 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmqhn\" (UniqueName: \"kubernetes.io/projected/addbbf63-4c8f-4cba-b4c6-7700a33cf878-kube-api-access-gmqhn\") pod \"novacell04ccc-account-delete-btv8j\" (UID: \"addbbf63-4c8f-4cba-b4c6-7700a33cf878\") " pod="nova-kuttl-default/novacell04ccc-account-delete-btv8j"
Jan 27 13:38:34 crc kubenswrapper[4786]: E0127 13:38:34.252850 4786 secret.go:188] Couldn't get secret nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-config-data: secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found
Jan 27 13:38:34 crc kubenswrapper[4786]: E0127 13:38:34.252920 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3cf31c53-34d5-4acc-b59e-096cfe798213-config-data podName:3cf31c53-34d5-4acc-b59e-096cfe798213 nodeName:}" failed. No retries permitted until 2026-01-27 13:38:35.252902381 +0000 UTC m=+1898.463516500 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/3cf31c53-34d5-4acc-b59e-096cfe798213-config-data") pod "nova-kuttl-cell1-compute-fake1-compute-0" (UID: "3cf31c53-34d5-4acc-b59e-096cfe798213") : secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found
Jan 27 13:38:34 crc kubenswrapper[4786]: I0127 13:38:34.253925 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/addbbf63-4c8f-4cba-b4c6-7700a33cf878-operator-scripts\") pod \"novacell04ccc-account-delete-btv8j\" (UID: \"addbbf63-4c8f-4cba-b4c6-7700a33cf878\") " pod="nova-kuttl-default/novacell04ccc-account-delete-btv8j"
Jan 27 13:38:34 crc kubenswrapper[4786]: I0127 13:38:34.253930 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef3b9fa2-2177-44b8-966a-e164c7751b1b-operator-scripts\") pod \"novaapib196-account-delete-4bvwx\" (UID: \"ef3b9fa2-2177-44b8-966a-e164c7751b1b\") " pod="nova-kuttl-default/novaapib196-account-delete-4bvwx"
Jan 27 13:38:34 crc kubenswrapper[4786]: I0127 13:38:34.273949 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxgn6\" (UniqueName: \"kubernetes.io/projected/ef3b9fa2-2177-44b8-966a-e164c7751b1b-kube-api-access-cxgn6\") pod \"novaapib196-account-delete-4bvwx\" (UID: \"ef3b9fa2-2177-44b8-966a-e164c7751b1b\") " pod="nova-kuttl-default/novaapib196-account-delete-4bvwx"
Jan 27 13:38:34 crc kubenswrapper[4786]: I0127 13:38:34.276730 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmqhn\" (UniqueName: \"kubernetes.io/projected/addbbf63-4c8f-4cba-b4c6-7700a33cf878-kube-api-access-gmqhn\") pod \"novacell04ccc-account-delete-btv8j\" (UID: \"addbbf63-4c8f-4cba-b4c6-7700a33cf878\") " pod="nova-kuttl-default/novacell04ccc-account-delete-btv8j"
Jan 27 13:38:34 crc kubenswrapper[4786]: I0127 13:38:34.277096 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapib196-account-delete-4bvwx"
Jan 27 13:38:34 crc kubenswrapper[4786]: I0127 13:38:34.406751 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell04ccc-account-delete-btv8j"
Jan 27 13:38:34 crc kubenswrapper[4786]: I0127 13:38:34.606648 4786 generic.go:334] "Generic (PLEG): container finished" podID="b9fb56bf-45dd-4959-960a-be52c9c66f4c" containerID="32bad080f544afdea35cbf47134fcdba54fcb3f4b5b3559a01b126d4e075ffd3" exitCode=143
Jan 27 13:38:34 crc kubenswrapper[4786]: I0127 13:38:34.606951 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"b9fb56bf-45dd-4959-960a-be52c9c66f4c","Type":"ContainerDied","Data":"32bad080f544afdea35cbf47134fcdba54fcb3f4b5b3559a01b126d4e075ffd3"}
Jan 27 13:38:34 crc kubenswrapper[4786]: I0127 13:38:34.610045 4786 generic.go:334] "Generic (PLEG): container finished" podID="f3baae91-6726-4018-ac0c-7036d4227441" containerID="e2d577b04a27cc87c7e9af3607c2886f80bd62473062d2658c9495af0b9702a8" exitCode=0
Jan 27 13:38:34 crc kubenswrapper[4786]: I0127 13:38:34.610130 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"f3baae91-6726-4018-ac0c-7036d4227441","Type":"ContainerDied","Data":"e2d577b04a27cc87c7e9af3607c2886f80bd62473062d2658c9495af0b9702a8"}
Jan 27 13:38:34 crc kubenswrapper[4786]: I0127 13:38:34.615068 4786 generic.go:334] "Generic (PLEG): container finished" podID="2942d84f-4aa3-4aa8-bec5-baf8bb43765c" containerID="d8d0ad3f35046e9b662e47aba8749d406208143400c1c05577f7eb946550ca3c" exitCode=143
Jan 27 13:38:34 crc kubenswrapper[4786]: I0127 13:38:34.615222 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"2942d84f-4aa3-4aa8-bec5-baf8bb43765c","Type":"ContainerDied","Data":"d8d0ad3f35046e9b662e47aba8749d406208143400c1c05577f7eb946550ca3c"}
Jan 27 13:38:34 crc kubenswrapper[4786]: I0127 13:38:34.615514 4786 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" secret="" err="secret \"nova-nova-kuttl-dockercfg-7v8b9\" not found"
Jan 27 13:38:34 crc kubenswrapper[4786]: I0127 13:38:34.686522 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell1935b-account-delete-xcg87"]
Jan 27 13:38:34 crc kubenswrapper[4786]: W0127 13:38:34.695943 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6bee6ae_ecee_409c_85eb_e1145c64c6fa.slice/crio-9b1e5d98f9abe32f9515f83f8b71f8af61ac7aa7f413bed844770e98be703133 WatchSource:0}: Error finding container 9b1e5d98f9abe32f9515f83f8b71f8af61ac7aa7f413bed844770e98be703133: Status 404 returned error can't find the container with id 9b1e5d98f9abe32f9515f83f8b71f8af61ac7aa7f413bed844770e98be703133
Jan 27 13:38:34 crc kubenswrapper[4786]: I0127 13:38:34.904042 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novaapib196-account-delete-4bvwx"]
Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.131338 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.154086 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell04ccc-account-delete-btv8j"]
Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.280756 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3baae91-6726-4018-ac0c-7036d4227441-config-data\") pod \"f3baae91-6726-4018-ac0c-7036d4227441\" (UID: \"f3baae91-6726-4018-ac0c-7036d4227441\") "
Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.280877 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlp8m\" (UniqueName: \"kubernetes.io/projected/f3baae91-6726-4018-ac0c-7036d4227441-kube-api-access-mlp8m\") pod \"f3baae91-6726-4018-ac0c-7036d4227441\" (UID: \"f3baae91-6726-4018-ac0c-7036d4227441\") "
Jan 27 13:38:35 crc kubenswrapper[4786]: E0127 13:38:35.281370 4786 secret.go:188] Couldn't get secret nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-config-data: secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found
Jan 27 13:38:35 crc kubenswrapper[4786]: E0127 13:38:35.281457 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3cf31c53-34d5-4acc-b59e-096cfe798213-config-data podName:3cf31c53-34d5-4acc-b59e-096cfe798213 nodeName:}" failed. No retries permitted until 2026-01-27 13:38:37.281437693 +0000 UTC m=+1900.492051812 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/3cf31c53-34d5-4acc-b59e-096cfe798213-config-data") pod "nova-kuttl-cell1-compute-fake1-compute-0" (UID: "3cf31c53-34d5-4acc-b59e-096cfe798213") : secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found
Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.291222 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3baae91-6726-4018-ac0c-7036d4227441-kube-api-access-mlp8m" (OuterVolumeSpecName: "kube-api-access-mlp8m") pod "f3baae91-6726-4018-ac0c-7036d4227441" (UID: "f3baae91-6726-4018-ac0c-7036d4227441"). InnerVolumeSpecName "kube-api-access-mlp8m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.335661 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3baae91-6726-4018-ac0c-7036d4227441-config-data" (OuterVolumeSpecName: "config-data") pod "f3baae91-6726-4018-ac0c-7036d4227441" (UID: "f3baae91-6726-4018-ac0c-7036d4227441"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.382908 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlp8m\" (UniqueName: \"kubernetes.io/projected/f3baae91-6726-4018-ac0c-7036d4227441-kube-api-access-mlp8m\") on node \"crc\" DevicePath \"\""
Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.383199 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3baae91-6726-4018-ac0c-7036d4227441-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.481358 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="109e73cd-4c9d-4841-9e95-950166f5cda0" path="/var/lib/kubelet/pods/109e73cd-4c9d-4841-9e95-950166f5cda0/volumes"
Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.482103 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="582cddc3-f46b-47c9-8aad-1b38afaf5cc0" path="/var/lib/kubelet/pods/582cddc3-f46b-47c9-8aad-1b38afaf5cc0/volumes"
Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.482689 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d5d577e-adfd-4726-9fda-627da7dff544" path="/var/lib/kubelet/pods/8d5d577e-adfd-4726-9fda-627da7dff544/volumes"
Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.483142 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3fad1ba-93ff-4712-bfaa-17cb808e87f4" path="/var/lib/kubelet/pods/a3fad1ba-93ff-4712-bfaa-17cb808e87f4/volumes"
Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.490413 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f44e04-2290-4929-ac72-51dd6213663d" path="/var/lib/kubelet/pods/c5f44e04-2290-4929-ac72-51dd6213663d/volumes"
Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.623634 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.629940 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"f3baae91-6726-4018-ac0c-7036d4227441","Type":"ContainerDied","Data":"8fb288ec5bbb53204da3e9b98604a5259e505465e150e02edd047e90525b378e"}
Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.630145 4786 scope.go:117] "RemoveContainer" containerID="e2d577b04a27cc87c7e9af3607c2886f80bd62473062d2658c9495af0b9702a8"
Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.630347 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.635424 4786 generic.go:334] "Generic (PLEG): container finished" podID="914b9838-f13a-4f25-aaed-3cea9132ce49" containerID="d428eded70ad74b7bcf7fd1951f0b472ea643870ab4e0cc754b14502075094d1" exitCode=0
Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.635496 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"914b9838-f13a-4f25-aaed-3cea9132ce49","Type":"ContainerDied","Data":"d428eded70ad74b7bcf7fd1951f0b472ea643870ab4e0cc754b14502075094d1"}
Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.655913 4786 generic.go:334] "Generic (PLEG): container finished" podID="c6bee6ae-ecee-409c-85eb-e1145c64c6fa" containerID="a60420953614ee05eac611886326f87f4b11076588b138dada9e65c762ed4613" exitCode=0
Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.656337 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell1935b-account-delete-xcg87" event={"ID":"c6bee6ae-ecee-409c-85eb-e1145c64c6fa","Type":"ContainerDied","Data":"a60420953614ee05eac611886326f87f4b11076588b138dada9e65c762ed4613"}
Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 
13:38:35.656395 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell1935b-account-delete-xcg87" event={"ID":"c6bee6ae-ecee-409c-85eb-e1145c64c6fa","Type":"ContainerStarted","Data":"9b1e5d98f9abe32f9515f83f8b71f8af61ac7aa7f413bed844770e98be703133"} Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.664557 4786 generic.go:334] "Generic (PLEG): container finished" podID="d49776e0-cc83-407e-b380-ef9f91a06224" containerID="04675d488705f8abfeb582e59a3ebbcf8327e162de471abb8235b590413c1583" exitCode=0 Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.664593 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.664639 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"d49776e0-cc83-407e-b380-ef9f91a06224","Type":"ContainerDied","Data":"04675d488705f8abfeb582e59a3ebbcf8327e162de471abb8235b590413c1583"} Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.664665 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"d49776e0-cc83-407e-b380-ef9f91a06224","Type":"ContainerDied","Data":"0488e8b0c5db53c95a589625c4718a3dd125767d637a8ae99114fabd72384d98"} Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.676165 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell04ccc-account-delete-btv8j" event={"ID":"addbbf63-4c8f-4cba-b4c6-7700a33cf878","Type":"ContainerStarted","Data":"ce3f70fb4ef78c62b242eabf8f5ef01da30df28a09fd5d14378b0e16d710552f"} Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.676212 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell04ccc-account-delete-btv8j" 
event={"ID":"addbbf63-4c8f-4cba-b4c6-7700a33cf878","Type":"ContainerStarted","Data":"e308014a1f14188e41661c70477b3f71c4e1b1cce2f8ad77af7f46f88cfd862a"} Jan 27 13:38:35 crc kubenswrapper[4786]: E0127 13:38:35.676577 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d428eded70ad74b7bcf7fd1951f0b472ea643870ab4e0cc754b14502075094d1 is running failed: container process not found" containerID="d428eded70ad74b7bcf7fd1951f0b472ea643870ab4e0cc754b14502075094d1" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 13:38:35 crc kubenswrapper[4786]: E0127 13:38:35.677736 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d428eded70ad74b7bcf7fd1951f0b472ea643870ab4e0cc754b14502075094d1 is running failed: container process not found" containerID="d428eded70ad74b7bcf7fd1951f0b472ea643870ab4e0cc754b14502075094d1" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 13:38:35 crc kubenswrapper[4786]: E0127 13:38:35.678467 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d428eded70ad74b7bcf7fd1951f0b472ea643870ab4e0cc754b14502075094d1 is running failed: container process not found" containerID="d428eded70ad74b7bcf7fd1951f0b472ea643870ab4e0cc754b14502075094d1" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 13:38:35 crc kubenswrapper[4786]: E0127 13:38:35.678499 4786 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d428eded70ad74b7bcf7fd1951f0b472ea643870ab4e0cc754b14502075094d1 is running failed: container process not found" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podUID="914b9838-f13a-4f25-aaed-3cea9132ce49" 
containerName="nova-kuttl-cell1-conductor-conductor" Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.678737 4786 scope.go:117] "RemoveContainer" containerID="04675d488705f8abfeb582e59a3ebbcf8327e162de471abb8235b590413c1583" Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.679120 4786 generic.go:334] "Generic (PLEG): container finished" podID="ef3b9fa2-2177-44b8-966a-e164c7751b1b" containerID="a64e8d5c052cea1d1c15a1794e4af433d23ec8a5de369dfb33911a9fbfff6316" exitCode=0 Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.679287 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="3cf31c53-34d5-4acc-b59e-096cfe798213" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" containerID="cri-o://7a8d504a24094c0a9dadddd97875ce7482cd0214e46e0fb4e03f736e3a9fea07" gracePeriod=30 Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.679430 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapib196-account-delete-4bvwx" event={"ID":"ef3b9fa2-2177-44b8-966a-e164c7751b1b","Type":"ContainerDied","Data":"a64e8d5c052cea1d1c15a1794e4af433d23ec8a5de369dfb33911a9fbfff6316"} Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.679525 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapib196-account-delete-4bvwx" event={"ID":"ef3b9fa2-2177-44b8-966a-e164c7751b1b","Type":"ContainerStarted","Data":"e0675e4da00b668b4d9d2d0aa61aec6235abd3511dd4eb30f11edb93288291ba"} Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.700949 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.708978 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 27 13:38:35 crc kubenswrapper[4786]: E0127 13:38:35.716669 4786 log.go:32] "ExecSync cmd 
from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="80563631b660a24bd6912ab75c50215c0cbac6aa4d2ba06f8efb4bcdb8957a55" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 13:38:35 crc kubenswrapper[4786]: E0127 13:38:35.724967 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="80563631b660a24bd6912ab75c50215c0cbac6aa4d2ba06f8efb4bcdb8957a55" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.732716 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/novacell04ccc-account-delete-btv8j" podStartSLOduration=2.732700288 podStartE2EDuration="2.732700288s" podCreationTimestamp="2026-01-27 13:38:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:38:35.729977514 +0000 UTC m=+1898.940591643" watchObservedRunningTime="2026-01-27 13:38:35.732700288 +0000 UTC m=+1898.943314397" Jan 27 13:38:35 crc kubenswrapper[4786]: E0127 13:38:35.740967 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="80563631b660a24bd6912ab75c50215c0cbac6aa4d2ba06f8efb4bcdb8957a55" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 13:38:35 crc kubenswrapper[4786]: E0127 13:38:35.741045 4786 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-scheduler-0" 
podUID="718480bb-da0c-488e-a922-86681c5516a4" containerName="nova-kuttl-scheduler-scheduler" Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.752597 4786 scope.go:117] "RemoveContainer" containerID="04675d488705f8abfeb582e59a3ebbcf8327e162de471abb8235b590413c1583" Jan 27 13:38:35 crc kubenswrapper[4786]: E0127 13:38:35.753135 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04675d488705f8abfeb582e59a3ebbcf8327e162de471abb8235b590413c1583\": container with ID starting with 04675d488705f8abfeb582e59a3ebbcf8327e162de471abb8235b590413c1583 not found: ID does not exist" containerID="04675d488705f8abfeb582e59a3ebbcf8327e162de471abb8235b590413c1583" Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.753170 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04675d488705f8abfeb582e59a3ebbcf8327e162de471abb8235b590413c1583"} err="failed to get container status \"04675d488705f8abfeb582e59a3ebbcf8327e162de471abb8235b590413c1583\": rpc error: code = NotFound desc = could not find container \"04675d488705f8abfeb582e59a3ebbcf8327e162de471abb8235b590413c1583\": container with ID starting with 04675d488705f8abfeb582e59a3ebbcf8327e162de471abb8235b590413c1583 not found: ID does not exist" Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.790069 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2ch6\" (UniqueName: \"kubernetes.io/projected/d49776e0-cc83-407e-b380-ef9f91a06224-kube-api-access-r2ch6\") pod \"d49776e0-cc83-407e-b380-ef9f91a06224\" (UID: \"d49776e0-cc83-407e-b380-ef9f91a06224\") " Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.790110 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d49776e0-cc83-407e-b380-ef9f91a06224-config-data\") pod \"d49776e0-cc83-407e-b380-ef9f91a06224\" (UID: 
\"d49776e0-cc83-407e-b380-ef9f91a06224\") " Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.796873 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d49776e0-cc83-407e-b380-ef9f91a06224-kube-api-access-r2ch6" (OuterVolumeSpecName: "kube-api-access-r2ch6") pod "d49776e0-cc83-407e-b380-ef9f91a06224" (UID: "d49776e0-cc83-407e-b380-ef9f91a06224"). InnerVolumeSpecName "kube-api-access-r2ch6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.826311 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d49776e0-cc83-407e-b380-ef9f91a06224-config-data" (OuterVolumeSpecName: "config-data") pod "d49776e0-cc83-407e-b380-ef9f91a06224" (UID: "d49776e0-cc83-407e-b380-ef9f91a06224"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.891502 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.893658 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2ch6\" (UniqueName: \"kubernetes.io/projected/d49776e0-cc83-407e-b380-ef9f91a06224-kube-api-access-r2ch6\") on node \"crc\" DevicePath \"\"" Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.893678 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d49776e0-cc83-407e-b380-ef9f91a06224-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.994924 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/914b9838-f13a-4f25-aaed-3cea9132ce49-config-data\") pod \"914b9838-f13a-4f25-aaed-3cea9132ce49\" (UID: \"914b9838-f13a-4f25-aaed-3cea9132ce49\") " Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.994973 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qb9px\" (UniqueName: \"kubernetes.io/projected/914b9838-f13a-4f25-aaed-3cea9132ce49-kube-api-access-qb9px\") pod \"914b9838-f13a-4f25-aaed-3cea9132ce49\" (UID: \"914b9838-f13a-4f25-aaed-3cea9132ce49\") " Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.997900 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 27 13:38:35 crc kubenswrapper[4786]: I0127 13:38:35.998418 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/914b9838-f13a-4f25-aaed-3cea9132ce49-kube-api-access-qb9px" (OuterVolumeSpecName: "kube-api-access-qb9px") pod "914b9838-f13a-4f25-aaed-3cea9132ce49" (UID: "914b9838-f13a-4f25-aaed-3cea9132ce49"). InnerVolumeSpecName "kube-api-access-qb9px". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:38:36 crc kubenswrapper[4786]: I0127 13:38:36.007704 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 27 13:38:36 crc kubenswrapper[4786]: I0127 13:38:36.029408 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/914b9838-f13a-4f25-aaed-3cea9132ce49-config-data" (OuterVolumeSpecName: "config-data") pod "914b9838-f13a-4f25-aaed-3cea9132ce49" (UID: "914b9838-f13a-4f25-aaed-3cea9132ce49"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:38:36 crc kubenswrapper[4786]: I0127 13:38:36.097129 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/914b9838-f13a-4f25-aaed-3cea9132ce49-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:38:36 crc kubenswrapper[4786]: I0127 13:38:36.097172 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qb9px\" (UniqueName: \"kubernetes.io/projected/914b9838-f13a-4f25-aaed-3cea9132ce49-kube-api-access-qb9px\") on node \"crc\" DevicePath \"\"" Jan 27 13:38:36 crc kubenswrapper[4786]: I0127 13:38:36.690727 4786 generic.go:334] "Generic (PLEG): container finished" podID="addbbf63-4c8f-4cba-b4c6-7700a33cf878" containerID="ce3f70fb4ef78c62b242eabf8f5ef01da30df28a09fd5d14378b0e16d710552f" exitCode=0 Jan 27 13:38:36 crc kubenswrapper[4786]: I0127 13:38:36.690814 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell04ccc-account-delete-btv8j" event={"ID":"addbbf63-4c8f-4cba-b4c6-7700a33cf878","Type":"ContainerDied","Data":"ce3f70fb4ef78c62b242eabf8f5ef01da30df28a09fd5d14378b0e16d710552f"} Jan 27 13:38:36 crc kubenswrapper[4786]: I0127 13:38:36.693890 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" 
event={"ID":"914b9838-f13a-4f25-aaed-3cea9132ce49","Type":"ContainerDied","Data":"c43c42090ff1e1bc570d843903d2eda5ae5bdc1e600bb896d18d2047b9e82528"} Jan 27 13:38:36 crc kubenswrapper[4786]: I0127 13:38:36.693945 4786 scope.go:117] "RemoveContainer" containerID="d428eded70ad74b7bcf7fd1951f0b472ea643870ab4e0cc754b14502075094d1" Jan 27 13:38:36 crc kubenswrapper[4786]: I0127 13:38:36.694051 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:38:36 crc kubenswrapper[4786]: I0127 13:38:36.734360 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 27 13:38:36 crc kubenswrapper[4786]: I0127 13:38:36.739680 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.086953 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapib196-account-delete-4bvwx" Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.095979 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell1935b-account-delete-xcg87" Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.213552 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6bee6ae-ecee-409c-85eb-e1145c64c6fa-operator-scripts\") pod \"c6bee6ae-ecee-409c-85eb-e1145c64c6fa\" (UID: \"c6bee6ae-ecee-409c-85eb-e1145c64c6fa\") " Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.213629 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxgn6\" (UniqueName: \"kubernetes.io/projected/ef3b9fa2-2177-44b8-966a-e164c7751b1b-kube-api-access-cxgn6\") pod \"ef3b9fa2-2177-44b8-966a-e164c7751b1b\" (UID: \"ef3b9fa2-2177-44b8-966a-e164c7751b1b\") " Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.213722 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef3b9fa2-2177-44b8-966a-e164c7751b1b-operator-scripts\") pod \"ef3b9fa2-2177-44b8-966a-e164c7751b1b\" (UID: \"ef3b9fa2-2177-44b8-966a-e164c7751b1b\") " Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.213753 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwbk7\" (UniqueName: \"kubernetes.io/projected/c6bee6ae-ecee-409c-85eb-e1145c64c6fa-kube-api-access-vwbk7\") pod \"c6bee6ae-ecee-409c-85eb-e1145c64c6fa\" (UID: \"c6bee6ae-ecee-409c-85eb-e1145c64c6fa\") " Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.214120 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6bee6ae-ecee-409c-85eb-e1145c64c6fa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c6bee6ae-ecee-409c-85eb-e1145c64c6fa" (UID: "c6bee6ae-ecee-409c-85eb-e1145c64c6fa"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.214121 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef3b9fa2-2177-44b8-966a-e164c7751b1b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ef3b9fa2-2177-44b8-966a-e164c7751b1b" (UID: "ef3b9fa2-2177-44b8-966a-e164c7751b1b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.219761 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6bee6ae-ecee-409c-85eb-e1145c64c6fa-kube-api-access-vwbk7" (OuterVolumeSpecName: "kube-api-access-vwbk7") pod "c6bee6ae-ecee-409c-85eb-e1145c64c6fa" (UID: "c6bee6ae-ecee-409c-85eb-e1145c64c6fa"). InnerVolumeSpecName "kube-api-access-vwbk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.221101 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef3b9fa2-2177-44b8-966a-e164c7751b1b-kube-api-access-cxgn6" (OuterVolumeSpecName: "kube-api-access-cxgn6") pod "ef3b9fa2-2177-44b8-966a-e164c7751b1b" (UID: "ef3b9fa2-2177-44b8-966a-e164c7751b1b"). InnerVolumeSpecName "kube-api-access-cxgn6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.315804 4786 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef3b9fa2-2177-44b8-966a-e164c7751b1b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.315837 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwbk7\" (UniqueName: \"kubernetes.io/projected/c6bee6ae-ecee-409c-85eb-e1145c64c6fa-kube-api-access-vwbk7\") on node \"crc\" DevicePath \"\"" Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.315846 4786 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6bee6ae-ecee-409c-85eb-e1145c64c6fa-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.315855 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxgn6\" (UniqueName: \"kubernetes.io/projected/ef3b9fa2-2177-44b8-966a-e164c7751b1b-kube-api-access-cxgn6\") on node \"crc\" DevicePath \"\"" Jan 27 13:38:37 crc kubenswrapper[4786]: E0127 13:38:37.315870 4786 secret.go:188] Couldn't get secret nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-config-data: secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 27 13:38:37 crc kubenswrapper[4786]: E0127 13:38:37.315954 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3cf31c53-34d5-4acc-b59e-096cfe798213-config-data podName:3cf31c53-34d5-4acc-b59e-096cfe798213 nodeName:}" failed. No retries permitted until 2026-01-27 13:38:41.315937987 +0000 UTC m=+1904.526552106 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/3cf31c53-34d5-4acc-b59e-096cfe798213-config-data") pod "nova-kuttl-cell1-compute-fake1-compute-0" (UID: "3cf31c53-34d5-4acc-b59e-096cfe798213") : secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.480807 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="914b9838-f13a-4f25-aaed-3cea9132ce49" path="/var/lib/kubelet/pods/914b9838-f13a-4f25-aaed-3cea9132ce49/volumes" Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.481477 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d49776e0-cc83-407e-b380-ef9f91a06224" path="/var/lib/kubelet/pods/d49776e0-cc83-407e-b380-ef9f91a06224/volumes" Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.482053 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3baae91-6726-4018-ac0c-7036d4227441" path="/var/lib/kubelet/pods/f3baae91-6726-4018-ac0c-7036d4227441/volumes" Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.662023 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.704238 4786 generic.go:334] "Generic (PLEG): container finished" podID="b9fb56bf-45dd-4959-960a-be52c9c66f4c" containerID="479011719da3d55374c9abd8377015b6cb000da09db2cbe52a7feb1e1b76d20b" exitCode=0 Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.704288 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"b9fb56bf-45dd-4959-960a-be52c9c66f4c","Type":"ContainerDied","Data":"479011719da3d55374c9abd8377015b6cb000da09db2cbe52a7feb1e1b76d20b"} Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.713428 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell1935b-account-delete-xcg87" event={"ID":"c6bee6ae-ecee-409c-85eb-e1145c64c6fa","Type":"ContainerDied","Data":"9b1e5d98f9abe32f9515f83f8b71f8af61ac7aa7f413bed844770e98be703133"} Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.713464 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell1935b-account-delete-xcg87" Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.713472 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b1e5d98f9abe32f9515f83f8b71f8af61ac7aa7f413bed844770e98be703133" Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.725057 4786 generic.go:334] "Generic (PLEG): container finished" podID="2942d84f-4aa3-4aa8-bec5-baf8bb43765c" containerID="0ba2be3e8bc554522c9b257099bb1983905bb3c9d18210949c86baf3c51be49b" exitCode=0 Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.725126 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.725159 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"2942d84f-4aa3-4aa8-bec5-baf8bb43765c","Type":"ContainerDied","Data":"0ba2be3e8bc554522c9b257099bb1983905bb3c9d18210949c86baf3c51be49b"} Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.725189 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"2942d84f-4aa3-4aa8-bec5-baf8bb43765c","Type":"ContainerDied","Data":"1bca17d079d725f15fdb7f654da41537a79b878cff66db5229a3cb14f0228335"} Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.725204 4786 scope.go:117] "RemoveContainer" containerID="0ba2be3e8bc554522c9b257099bb1983905bb3c9d18210949c86baf3c51be49b" Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.730072 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapib196-account-delete-4bvwx" event={"ID":"ef3b9fa2-2177-44b8-966a-e164c7751b1b","Type":"ContainerDied","Data":"e0675e4da00b668b4d9d2d0aa61aec6235abd3511dd4eb30f11edb93288291ba"} Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.730097 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0675e4da00b668b4d9d2d0aa61aec6235abd3511dd4eb30f11edb93288291ba" Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.730097 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novaapib196-account-delete-4bvwx" Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.756840 4786 scope.go:117] "RemoveContainer" containerID="d8d0ad3f35046e9b662e47aba8749d406208143400c1c05577f7eb946550ca3c" Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.783737 4786 scope.go:117] "RemoveContainer" containerID="0ba2be3e8bc554522c9b257099bb1983905bb3c9d18210949c86baf3c51be49b" Jan 27 13:38:37 crc kubenswrapper[4786]: E0127 13:38:37.784246 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ba2be3e8bc554522c9b257099bb1983905bb3c9d18210949c86baf3c51be49b\": container with ID starting with 0ba2be3e8bc554522c9b257099bb1983905bb3c9d18210949c86baf3c51be49b not found: ID does not exist" containerID="0ba2be3e8bc554522c9b257099bb1983905bb3c9d18210949c86baf3c51be49b" Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.784291 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ba2be3e8bc554522c9b257099bb1983905bb3c9d18210949c86baf3c51be49b"} err="failed to get container status \"0ba2be3e8bc554522c9b257099bb1983905bb3c9d18210949c86baf3c51be49b\": rpc error: code = NotFound desc = could not find container \"0ba2be3e8bc554522c9b257099bb1983905bb3c9d18210949c86baf3c51be49b\": container with ID starting with 0ba2be3e8bc554522c9b257099bb1983905bb3c9d18210949c86baf3c51be49b not found: ID does not exist" Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.784318 4786 scope.go:117] "RemoveContainer" containerID="d8d0ad3f35046e9b662e47aba8749d406208143400c1c05577f7eb946550ca3c" Jan 27 13:38:37 crc kubenswrapper[4786]: E0127 13:38:37.784934 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8d0ad3f35046e9b662e47aba8749d406208143400c1c05577f7eb946550ca3c\": container with ID starting with 
d8d0ad3f35046e9b662e47aba8749d406208143400c1c05577f7eb946550ca3c not found: ID does not exist" containerID="d8d0ad3f35046e9b662e47aba8749d406208143400c1c05577f7eb946550ca3c" Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.784965 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8d0ad3f35046e9b662e47aba8749d406208143400c1c05577f7eb946550ca3c"} err="failed to get container status \"d8d0ad3f35046e9b662e47aba8749d406208143400c1c05577f7eb946550ca3c\": rpc error: code = NotFound desc = could not find container \"d8d0ad3f35046e9b662e47aba8749d406208143400c1c05577f7eb946550ca3c\": container with ID starting with d8d0ad3f35046e9b662e47aba8749d406208143400c1c05577f7eb946550ca3c not found: ID does not exist" Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.824521 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bk8fx\" (UniqueName: \"kubernetes.io/projected/2942d84f-4aa3-4aa8-bec5-baf8bb43765c-kube-api-access-bk8fx\") pod \"2942d84f-4aa3-4aa8-bec5-baf8bb43765c\" (UID: \"2942d84f-4aa3-4aa8-bec5-baf8bb43765c\") " Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.824660 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2942d84f-4aa3-4aa8-bec5-baf8bb43765c-config-data\") pod \"2942d84f-4aa3-4aa8-bec5-baf8bb43765c\" (UID: \"2942d84f-4aa3-4aa8-bec5-baf8bb43765c\") " Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.824712 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2942d84f-4aa3-4aa8-bec5-baf8bb43765c-logs\") pod \"2942d84f-4aa3-4aa8-bec5-baf8bb43765c\" (UID: \"2942d84f-4aa3-4aa8-bec5-baf8bb43765c\") " Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.826033 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/2942d84f-4aa3-4aa8-bec5-baf8bb43765c-logs" (OuterVolumeSpecName: "logs") pod "2942d84f-4aa3-4aa8-bec5-baf8bb43765c" (UID: "2942d84f-4aa3-4aa8-bec5-baf8bb43765c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.833434 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2942d84f-4aa3-4aa8-bec5-baf8bb43765c-kube-api-access-bk8fx" (OuterVolumeSpecName: "kube-api-access-bk8fx") pod "2942d84f-4aa3-4aa8-bec5-baf8bb43765c" (UID: "2942d84f-4aa3-4aa8-bec5-baf8bb43765c"). InnerVolumeSpecName "kube-api-access-bk8fx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.838732 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.846966 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2942d84f-4aa3-4aa8-bec5-baf8bb43765c-config-data" (OuterVolumeSpecName: "config-data") pod "2942d84f-4aa3-4aa8-bec5-baf8bb43765c" (UID: "2942d84f-4aa3-4aa8-bec5-baf8bb43765c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.926132 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lrxc\" (UniqueName: \"kubernetes.io/projected/b9fb56bf-45dd-4959-960a-be52c9c66f4c-kube-api-access-6lrxc\") pod \"b9fb56bf-45dd-4959-960a-be52c9c66f4c\" (UID: \"b9fb56bf-45dd-4959-960a-be52c9c66f4c\") " Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.926270 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9fb56bf-45dd-4959-960a-be52c9c66f4c-logs\") pod \"b9fb56bf-45dd-4959-960a-be52c9c66f4c\" (UID: \"b9fb56bf-45dd-4959-960a-be52c9c66f4c\") " Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.926303 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9fb56bf-45dd-4959-960a-be52c9c66f4c-config-data\") pod \"b9fb56bf-45dd-4959-960a-be52c9c66f4c\" (UID: \"b9fb56bf-45dd-4959-960a-be52c9c66f4c\") " Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.926678 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2942d84f-4aa3-4aa8-bec5-baf8bb43765c-logs\") on node \"crc\" DevicePath \"\"" Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.926697 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bk8fx\" (UniqueName: \"kubernetes.io/projected/2942d84f-4aa3-4aa8-bec5-baf8bb43765c-kube-api-access-bk8fx\") on node \"crc\" DevicePath \"\"" Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.926710 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2942d84f-4aa3-4aa8-bec5-baf8bb43765c-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.926830 4786 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9fb56bf-45dd-4959-960a-be52c9c66f4c-logs" (OuterVolumeSpecName: "logs") pod "b9fb56bf-45dd-4959-960a-be52c9c66f4c" (UID: "b9fb56bf-45dd-4959-960a-be52c9c66f4c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.931906 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9fb56bf-45dd-4959-960a-be52c9c66f4c-kube-api-access-6lrxc" (OuterVolumeSpecName: "kube-api-access-6lrxc") pod "b9fb56bf-45dd-4959-960a-be52c9c66f4c" (UID: "b9fb56bf-45dd-4959-960a-be52c9c66f4c"). InnerVolumeSpecName "kube-api-access-6lrxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:38:37 crc kubenswrapper[4786]: I0127 13:38:37.949030 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9fb56bf-45dd-4959-960a-be52c9c66f4c-config-data" (OuterVolumeSpecName: "config-data") pod "b9fb56bf-45dd-4959-960a-be52c9c66f4c" (UID: "b9fb56bf-45dd-4959-960a-be52c9c66f4c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:38:38 crc kubenswrapper[4786]: I0127 13:38:38.027737 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9fb56bf-45dd-4959-960a-be52c9c66f4c-logs\") on node \"crc\" DevicePath \"\"" Jan 27 13:38:38 crc kubenswrapper[4786]: I0127 13:38:38.027764 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9fb56bf-45dd-4959-960a-be52c9c66f4c-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:38:38 crc kubenswrapper[4786]: I0127 13:38:38.027775 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lrxc\" (UniqueName: \"kubernetes.io/projected/b9fb56bf-45dd-4959-960a-be52c9c66f4c-kube-api-access-6lrxc\") on node \"crc\" DevicePath \"\"" Jan 27 13:38:38 crc kubenswrapper[4786]: I0127 13:38:38.098650 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell04ccc-account-delete-btv8j" Jan 27 13:38:38 crc kubenswrapper[4786]: I0127 13:38:38.112594 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:38:38 crc kubenswrapper[4786]: I0127 13:38:38.121905 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:38:38 crc kubenswrapper[4786]: I0127 13:38:38.230429 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmqhn\" (UniqueName: \"kubernetes.io/projected/addbbf63-4c8f-4cba-b4c6-7700a33cf878-kube-api-access-gmqhn\") pod \"addbbf63-4c8f-4cba-b4c6-7700a33cf878\" (UID: \"addbbf63-4c8f-4cba-b4c6-7700a33cf878\") " Jan 27 13:38:38 crc kubenswrapper[4786]: I0127 13:38:38.230481 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/addbbf63-4c8f-4cba-b4c6-7700a33cf878-operator-scripts\") pod \"addbbf63-4c8f-4cba-b4c6-7700a33cf878\" (UID: \"addbbf63-4c8f-4cba-b4c6-7700a33cf878\") " Jan 27 13:38:38 crc kubenswrapper[4786]: I0127 13:38:38.231413 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/addbbf63-4c8f-4cba-b4c6-7700a33cf878-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "addbbf63-4c8f-4cba-b4c6-7700a33cf878" (UID: "addbbf63-4c8f-4cba-b4c6-7700a33cf878"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:38:38 crc kubenswrapper[4786]: I0127 13:38:38.235775 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/addbbf63-4c8f-4cba-b4c6-7700a33cf878-kube-api-access-gmqhn" (OuterVolumeSpecName: "kube-api-access-gmqhn") pod "addbbf63-4c8f-4cba-b4c6-7700a33cf878" (UID: "addbbf63-4c8f-4cba-b4c6-7700a33cf878"). InnerVolumeSpecName "kube-api-access-gmqhn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:38:38 crc kubenswrapper[4786]: I0127 13:38:38.331598 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmqhn\" (UniqueName: \"kubernetes.io/projected/addbbf63-4c8f-4cba-b4c6-7700a33cf878-kube-api-access-gmqhn\") on node \"crc\" DevicePath \"\"" Jan 27 13:38:38 crc kubenswrapper[4786]: I0127 13:38:38.331637 4786 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/addbbf63-4c8f-4cba-b4c6-7700a33cf878-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:38:38 crc kubenswrapper[4786]: I0127 13:38:38.739567 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell04ccc-account-delete-btv8j" event={"ID":"addbbf63-4c8f-4cba-b4c6-7700a33cf878","Type":"ContainerDied","Data":"e308014a1f14188e41661c70477b3f71c4e1b1cce2f8ad77af7f46f88cfd862a"} Jan 27 13:38:38 crc kubenswrapper[4786]: I0127 13:38:38.739860 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e308014a1f14188e41661c70477b3f71c4e1b1cce2f8ad77af7f46f88cfd862a" Jan 27 13:38:38 crc kubenswrapper[4786]: I0127 13:38:38.739595 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell04ccc-account-delete-btv8j" Jan 27 13:38:38 crc kubenswrapper[4786]: I0127 13:38:38.741470 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"b9fb56bf-45dd-4959-960a-be52c9c66f4c","Type":"ContainerDied","Data":"7837ba8c5380789808c429ecb5888f62e511d2059b5b15424166ed15c3b873bf"} Jan 27 13:38:38 crc kubenswrapper[4786]: I0127 13:38:38.741516 4786 scope.go:117] "RemoveContainer" containerID="479011719da3d55374c9abd8377015b6cb000da09db2cbe52a7feb1e1b76d20b" Jan 27 13:38:38 crc kubenswrapper[4786]: I0127 13:38:38.741525 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:38:38 crc kubenswrapper[4786]: I0127 13:38:38.783204 4786 scope.go:117] "RemoveContainer" containerID="32bad080f544afdea35cbf47134fcdba54fcb3f4b5b3559a01b126d4e075ffd3" Jan 27 13:38:38 crc kubenswrapper[4786]: I0127 13:38:38.790611 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:38:38 crc kubenswrapper[4786]: I0127 13:38:38.797981 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:38:38 crc kubenswrapper[4786]: I0127 13:38:38.833116 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-cm58n"] Jan 27 13:38:38 crc kubenswrapper[4786]: I0127 13:38:38.841226 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-cm58n"] Jan 27 13:38:38 crc kubenswrapper[4786]: I0127 13:38:38.851897 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-935b-account-create-update-hpggs"] Jan 27 13:38:38 crc kubenswrapper[4786]: I0127 13:38:38.865440 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novacell1935b-account-delete-xcg87"] Jan 27 13:38:38 crc kubenswrapper[4786]: I0127 13:38:38.872827 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-935b-account-create-update-hpggs"] Jan 27 13:38:38 crc kubenswrapper[4786]: I0127 13:38:38.879027 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novacell1935b-account-delete-xcg87"] Jan 27 13:38:38 crc kubenswrapper[4786]: I0127 13:38:38.964927 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-db-create-jqqcj"] Jan 27 13:38:38 crc kubenswrapper[4786]: I0127 13:38:38.974670 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-db-create-jqqcj"] Jan 27 13:38:38 
crc kubenswrapper[4786]: I0127 13:38:38.980854 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-b196-account-create-update-tzbbp"] Jan 27 13:38:38 crc kubenswrapper[4786]: I0127 13:38:38.987113 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novaapib196-account-delete-4bvwx"] Jan 27 13:38:38 crc kubenswrapper[4786]: I0127 13:38:38.994120 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-b196-account-create-update-tzbbp"] Jan 27 13:38:39 crc kubenswrapper[4786]: I0127 13:38:39.001024 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novaapib196-account-delete-4bvwx"] Jan 27 13:38:39 crc kubenswrapper[4786]: I0127 13:38:39.065274 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-km65q"] Jan 27 13:38:39 crc kubenswrapper[4786]: I0127 13:38:39.076901 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-km65q"] Jan 27 13:38:39 crc kubenswrapper[4786]: I0127 13:38:39.085165 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-4ccc-account-create-update-dpbzp"] Jan 27 13:38:39 crc kubenswrapper[4786]: I0127 13:38:39.094354 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-4ccc-account-create-update-dpbzp"] Jan 27 13:38:39 crc kubenswrapper[4786]: I0127 13:38:39.102590 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novacell04ccc-account-delete-btv8j"] Jan 27 13:38:39 crc kubenswrapper[4786]: I0127 13:38:39.110108 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novacell04ccc-account-delete-btv8j"] Jan 27 13:38:39 crc kubenswrapper[4786]: I0127 13:38:39.519050 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="020d5622-86fc-434d-95af-a38096706001" 
path="/var/lib/kubelet/pods/020d5622-86fc-434d-95af-a38096706001/volumes" Jan 27 13:38:39 crc kubenswrapper[4786]: I0127 13:38:39.519637 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="040cc9fb-7f10-44a2-93f2-8249f45a9a59" path="/var/lib/kubelet/pods/040cc9fb-7f10-44a2-93f2-8249f45a9a59/volumes" Jan 27 13:38:39 crc kubenswrapper[4786]: I0127 13:38:39.520143 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="232589d7-b1da-4835-9658-979713413d19" path="/var/lib/kubelet/pods/232589d7-b1da-4835-9658-979713413d19/volumes" Jan 27 13:38:39 crc kubenswrapper[4786]: I0127 13:38:39.521173 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2942d84f-4aa3-4aa8-bec5-baf8bb43765c" path="/var/lib/kubelet/pods/2942d84f-4aa3-4aa8-bec5-baf8bb43765c/volumes" Jan 27 13:38:39 crc kubenswrapper[4786]: I0127 13:38:39.521802 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="678eb9a6-5e7a-48f7-8843-bd3c3933b133" path="/var/lib/kubelet/pods/678eb9a6-5e7a-48f7-8843-bd3c3933b133/volumes" Jan 27 13:38:39 crc kubenswrapper[4786]: I0127 13:38:39.522261 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="addbbf63-4c8f-4cba-b4c6-7700a33cf878" path="/var/lib/kubelet/pods/addbbf63-4c8f-4cba-b4c6-7700a33cf878/volumes" Jan 27 13:38:39 crc kubenswrapper[4786]: I0127 13:38:39.522732 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9fb56bf-45dd-4959-960a-be52c9c66f4c" path="/var/lib/kubelet/pods/b9fb56bf-45dd-4959-960a-be52c9c66f4c/volumes" Jan 27 13:38:39 crc kubenswrapper[4786]: I0127 13:38:39.532920 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6bee6ae-ecee-409c-85eb-e1145c64c6fa" path="/var/lib/kubelet/pods/c6bee6ae-ecee-409c-85eb-e1145c64c6fa/volumes" Jan 27 13:38:39 crc kubenswrapper[4786]: I0127 13:38:39.533397 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d54ede2f-e6a9-495d-ae99-650d3465872f" 
path="/var/lib/kubelet/pods/d54ede2f-e6a9-495d-ae99-650d3465872f/volumes" Jan 27 13:38:39 crc kubenswrapper[4786]: I0127 13:38:39.533920 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef3b9fa2-2177-44b8-966a-e164c7751b1b" path="/var/lib/kubelet/pods/ef3b9fa2-2177-44b8-966a-e164c7751b1b/volumes" Jan 27 13:38:39 crc kubenswrapper[4786]: I0127 13:38:39.534854 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9c32b0d-83ad-46dd-b388-2723bde9de7f" path="/var/lib/kubelet/pods/f9c32b0d-83ad-46dd-b388-2723bde9de7f/volumes" Jan 27 13:38:39 crc kubenswrapper[4786]: I0127 13:38:39.750924 4786 generic.go:334] "Generic (PLEG): container finished" podID="718480bb-da0c-488e-a922-86681c5516a4" containerID="80563631b660a24bd6912ab75c50215c0cbac6aa4d2ba06f8efb4bcdb8957a55" exitCode=0 Jan 27 13:38:39 crc kubenswrapper[4786]: I0127 13:38:39.751259 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"718480bb-da0c-488e-a922-86681c5516a4","Type":"ContainerDied","Data":"80563631b660a24bd6912ab75c50215c0cbac6aa4d2ba06f8efb4bcdb8957a55"} Jan 27 13:38:39 crc kubenswrapper[4786]: I0127 13:38:39.865024 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:38:39 crc kubenswrapper[4786]: I0127 13:38:39.965103 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/718480bb-da0c-488e-a922-86681c5516a4-config-data\") pod \"718480bb-da0c-488e-a922-86681c5516a4\" (UID: \"718480bb-da0c-488e-a922-86681c5516a4\") " Jan 27 13:38:39 crc kubenswrapper[4786]: I0127 13:38:39.965238 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cklj\" (UniqueName: \"kubernetes.io/projected/718480bb-da0c-488e-a922-86681c5516a4-kube-api-access-9cklj\") pod \"718480bb-da0c-488e-a922-86681c5516a4\" (UID: \"718480bb-da0c-488e-a922-86681c5516a4\") " Jan 27 13:38:39 crc kubenswrapper[4786]: I0127 13:38:39.969952 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/718480bb-da0c-488e-a922-86681c5516a4-kube-api-access-9cklj" (OuterVolumeSpecName: "kube-api-access-9cklj") pod "718480bb-da0c-488e-a922-86681c5516a4" (UID: "718480bb-da0c-488e-a922-86681c5516a4"). InnerVolumeSpecName "kube-api-access-9cklj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:38:39 crc kubenswrapper[4786]: I0127 13:38:39.994016 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/718480bb-da0c-488e-a922-86681c5516a4-config-data" (OuterVolumeSpecName: "config-data") pod "718480bb-da0c-488e-a922-86681c5516a4" (UID: "718480bb-da0c-488e-a922-86681c5516a4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:38:40 crc kubenswrapper[4786]: I0127 13:38:40.067697 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/718480bb-da0c-488e-a922-86681c5516a4-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:38:40 crc kubenswrapper[4786]: I0127 13:38:40.067736 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cklj\" (UniqueName: \"kubernetes.io/projected/718480bb-da0c-488e-a922-86681c5516a4-kube-api-access-9cklj\") on node \"crc\" DevicePath \"\"" Jan 27 13:38:40 crc kubenswrapper[4786]: E0127 13:38:40.098878 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7a8d504a24094c0a9dadddd97875ce7482cd0214e46e0fb4e03f736e3a9fea07" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 27 13:38:40 crc kubenswrapper[4786]: E0127 13:38:40.100655 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7a8d504a24094c0a9dadddd97875ce7482cd0214e46e0fb4e03f736e3a9fea07" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 27 13:38:40 crc kubenswrapper[4786]: E0127 13:38:40.103647 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7a8d504a24094c0a9dadddd97875ce7482cd0214e46e0fb4e03f736e3a9fea07" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 27 13:38:40 crc kubenswrapper[4786]: E0127 13:38:40.103680 4786 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code 
-1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="3cf31c53-34d5-4acc-b59e-096cfe798213" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 27 13:38:40 crc kubenswrapper[4786]: I0127 13:38:40.761973 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"718480bb-da0c-488e-a922-86681c5516a4","Type":"ContainerDied","Data":"d53844abe3f78e17fa9568c9407a24ad131151d78f38c0af6973ef64d011d5f2"} Jan 27 13:38:40 crc kubenswrapper[4786]: I0127 13:38:40.762012 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:38:40 crc kubenswrapper[4786]: I0127 13:38:40.762033 4786 scope.go:117] "RemoveContainer" containerID="80563631b660a24bd6912ab75c50215c0cbac6aa4d2ba06f8efb4bcdb8957a55" Jan 27 13:38:40 crc kubenswrapper[4786]: I0127 13:38:40.794234 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:38:40 crc kubenswrapper[4786]: I0127 13:38:40.799924 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.295133 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-db-create-62wq6"] Jan 27 13:38:41 crc kubenswrapper[4786]: E0127 13:38:41.295502 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2942d84f-4aa3-4aa8-bec5-baf8bb43765c" containerName="nova-kuttl-metadata-log" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.295517 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="2942d84f-4aa3-4aa8-bec5-baf8bb43765c" containerName="nova-kuttl-metadata-log" Jan 27 13:38:41 crc kubenswrapper[4786]: E0127 13:38:41.295532 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="718480bb-da0c-488e-a922-86681c5516a4" 
containerName="nova-kuttl-scheduler-scheduler" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.295541 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="718480bb-da0c-488e-a922-86681c5516a4" containerName="nova-kuttl-scheduler-scheduler" Jan 27 13:38:41 crc kubenswrapper[4786]: E0127 13:38:41.295550 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9fb56bf-45dd-4959-960a-be52c9c66f4c" containerName="nova-kuttl-api-api" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.295558 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9fb56bf-45dd-4959-960a-be52c9c66f4c" containerName="nova-kuttl-api-api" Jan 27 13:38:41 crc kubenswrapper[4786]: E0127 13:38:41.295588 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3baae91-6726-4018-ac0c-7036d4227441" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.295596 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3baae91-6726-4018-ac0c-7036d4227441" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 27 13:38:41 crc kubenswrapper[4786]: E0127 13:38:41.295632 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9fb56bf-45dd-4959-960a-be52c9c66f4c" containerName="nova-kuttl-api-log" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.295640 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9fb56bf-45dd-4959-960a-be52c9c66f4c" containerName="nova-kuttl-api-log" Jan 27 13:38:41 crc kubenswrapper[4786]: E0127 13:38:41.295654 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef3b9fa2-2177-44b8-966a-e164c7751b1b" containerName="mariadb-account-delete" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.295663 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef3b9fa2-2177-44b8-966a-e164c7751b1b" containerName="mariadb-account-delete" Jan 27 13:38:41 crc kubenswrapper[4786]: E0127 13:38:41.295676 4786 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="914b9838-f13a-4f25-aaed-3cea9132ce49" containerName="nova-kuttl-cell1-conductor-conductor" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.295683 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="914b9838-f13a-4f25-aaed-3cea9132ce49" containerName="nova-kuttl-cell1-conductor-conductor" Jan 27 13:38:41 crc kubenswrapper[4786]: E0127 13:38:41.295695 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2942d84f-4aa3-4aa8-bec5-baf8bb43765c" containerName="nova-kuttl-metadata-metadata" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.295703 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="2942d84f-4aa3-4aa8-bec5-baf8bb43765c" containerName="nova-kuttl-metadata-metadata" Jan 27 13:38:41 crc kubenswrapper[4786]: E0127 13:38:41.295718 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6bee6ae-ecee-409c-85eb-e1145c64c6fa" containerName="mariadb-account-delete" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.295725 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6bee6ae-ecee-409c-85eb-e1145c64c6fa" containerName="mariadb-account-delete" Jan 27 13:38:41 crc kubenswrapper[4786]: E0127 13:38:41.295735 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="addbbf63-4c8f-4cba-b4c6-7700a33cf878" containerName="mariadb-account-delete" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.295742 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="addbbf63-4c8f-4cba-b4c6-7700a33cf878" containerName="mariadb-account-delete" Jan 27 13:38:41 crc kubenswrapper[4786]: E0127 13:38:41.295757 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49776e0-cc83-407e-b380-ef9f91a06224" containerName="nova-kuttl-cell0-conductor-conductor" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.295765 4786 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d49776e0-cc83-407e-b380-ef9f91a06224" containerName="nova-kuttl-cell0-conductor-conductor" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.295944 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9fb56bf-45dd-4959-960a-be52c9c66f4c" containerName="nova-kuttl-api-api" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.295964 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="914b9838-f13a-4f25-aaed-3cea9132ce49" containerName="nova-kuttl-cell1-conductor-conductor" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.295974 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9fb56bf-45dd-4959-960a-be52c9c66f4c" containerName="nova-kuttl-api-log" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.295983 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="2942d84f-4aa3-4aa8-bec5-baf8bb43765c" containerName="nova-kuttl-metadata-metadata" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.295995 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="718480bb-da0c-488e-a922-86681c5516a4" containerName="nova-kuttl-scheduler-scheduler" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.296007 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="addbbf63-4c8f-4cba-b4c6-7700a33cf878" containerName="mariadb-account-delete" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.296020 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="2942d84f-4aa3-4aa8-bec5-baf8bb43765c" containerName="nova-kuttl-metadata-log" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.296032 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef3b9fa2-2177-44b8-966a-e164c7751b1b" containerName="mariadb-account-delete" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.296041 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3baae91-6726-4018-ac0c-7036d4227441" 
containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.296054 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6bee6ae-ecee-409c-85eb-e1145c64c6fa" containerName="mariadb-account-delete" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.296065 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="d49776e0-cc83-407e-b380-ef9f91a06224" containerName="nova-kuttl-cell0-conductor-conductor" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.296702 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-62wq6" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.306248 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-62wq6"] Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.386570 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd8k2\" (UniqueName: \"kubernetes.io/projected/49f94a84-88d4-4ac3-aa57-6c82d914b6e7-kube-api-access-dd8k2\") pod \"nova-api-db-create-62wq6\" (UID: \"49f94a84-88d4-4ac3-aa57-6c82d914b6e7\") " pod="nova-kuttl-default/nova-api-db-create-62wq6" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.386652 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49f94a84-88d4-4ac3-aa57-6c82d914b6e7-operator-scripts\") pod \"nova-api-db-create-62wq6\" (UID: \"49f94a84-88d4-4ac3-aa57-6c82d914b6e7\") " pod="nova-kuttl-default/nova-api-db-create-62wq6" Jan 27 13:38:41 crc kubenswrapper[4786]: E0127 13:38:41.386927 4786 secret.go:188] Couldn't get secret nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-config-data: secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 27 13:38:41 crc kubenswrapper[4786]: E0127 13:38:41.387026 4786 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3cf31c53-34d5-4acc-b59e-096cfe798213-config-data podName:3cf31c53-34d5-4acc-b59e-096cfe798213 nodeName:}" failed. No retries permitted until 2026-01-27 13:38:49.38697759 +0000 UTC m=+1912.597591709 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/3cf31c53-34d5-4acc-b59e-096cfe798213-config-data") pod "nova-kuttl-cell1-compute-fake1-compute-0" (UID: "3cf31c53-34d5-4acc-b59e-096cfe798213") : secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.393466 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-crwnd"] Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.394583 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-crwnd" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.404998 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-crwnd"] Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.476861 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="718480bb-da0c-488e-a922-86681c5516a4" path="/var/lib/kubelet/pods/718480bb-da0c-488e-a922-86681c5516a4/volumes" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.488206 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5d4fb63-2bb4-4949-b567-ae619a7925af-operator-scripts\") pod \"nova-cell0-db-create-crwnd\" (UID: \"c5d4fb63-2bb4-4949-b567-ae619a7925af\") " pod="nova-kuttl-default/nova-cell0-db-create-crwnd" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.488315 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd8k2\" (UniqueName: 
\"kubernetes.io/projected/49f94a84-88d4-4ac3-aa57-6c82d914b6e7-kube-api-access-dd8k2\") pod \"nova-api-db-create-62wq6\" (UID: \"49f94a84-88d4-4ac3-aa57-6c82d914b6e7\") " pod="nova-kuttl-default/nova-api-db-create-62wq6" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.488360 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49f94a84-88d4-4ac3-aa57-6c82d914b6e7-operator-scripts\") pod \"nova-api-db-create-62wq6\" (UID: \"49f94a84-88d4-4ac3-aa57-6c82d914b6e7\") " pod="nova-kuttl-default/nova-api-db-create-62wq6" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.488422 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74chp\" (UniqueName: \"kubernetes.io/projected/c5d4fb63-2bb4-4949-b567-ae619a7925af-kube-api-access-74chp\") pod \"nova-cell0-db-create-crwnd\" (UID: \"c5d4fb63-2bb4-4949-b567-ae619a7925af\") " pod="nova-kuttl-default/nova-cell0-db-create-crwnd" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.490991 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49f94a84-88d4-4ac3-aa57-6c82d914b6e7-operator-scripts\") pod \"nova-api-db-create-62wq6\" (UID: \"49f94a84-88d4-4ac3-aa57-6c82d914b6e7\") " pod="nova-kuttl-default/nova-api-db-create-62wq6" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.515111 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-30f1-account-create-update-lccvr"] Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.516022 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-30f1-account-create-update-lccvr" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.518379 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-api-db-secret" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.518863 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd8k2\" (UniqueName: \"kubernetes.io/projected/49f94a84-88d4-4ac3-aa57-6c82d914b6e7-kube-api-access-dd8k2\") pod \"nova-api-db-create-62wq6\" (UID: \"49f94a84-88d4-4ac3-aa57-6c82d914b6e7\") " pod="nova-kuttl-default/nova-api-db-create-62wq6" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.523705 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-f6c4f"] Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.524757 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-f6c4f" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.532446 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-f6c4f"] Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.571453 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-30f1-account-create-update-lccvr"] Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.590020 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74chp\" (UniqueName: \"kubernetes.io/projected/c5d4fb63-2bb4-4949-b567-ae619a7925af-kube-api-access-74chp\") pod \"nova-cell0-db-create-crwnd\" (UID: \"c5d4fb63-2bb4-4949-b567-ae619a7925af\") " pod="nova-kuttl-default/nova-cell0-db-create-crwnd" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.590070 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4k2q\" (UniqueName: 
\"kubernetes.io/projected/f79b70f2-36e5-4532-bf69-a70a865afe9d-kube-api-access-l4k2q\") pod \"nova-api-30f1-account-create-update-lccvr\" (UID: \"f79b70f2-36e5-4532-bf69-a70a865afe9d\") " pod="nova-kuttl-default/nova-api-30f1-account-create-update-lccvr" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.590142 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5d4fb63-2bb4-4949-b567-ae619a7925af-operator-scripts\") pod \"nova-cell0-db-create-crwnd\" (UID: \"c5d4fb63-2bb4-4949-b567-ae619a7925af\") " pod="nova-kuttl-default/nova-cell0-db-create-crwnd" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.590179 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f79b70f2-36e5-4532-bf69-a70a865afe9d-operator-scripts\") pod \"nova-api-30f1-account-create-update-lccvr\" (UID: \"f79b70f2-36e5-4532-bf69-a70a865afe9d\") " pod="nova-kuttl-default/nova-api-30f1-account-create-update-lccvr" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.591357 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5d4fb63-2bb4-4949-b567-ae619a7925af-operator-scripts\") pod \"nova-cell0-db-create-crwnd\" (UID: \"c5d4fb63-2bb4-4949-b567-ae619a7925af\") " pod="nova-kuttl-default/nova-cell0-db-create-crwnd" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.613694 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74chp\" (UniqueName: \"kubernetes.io/projected/c5d4fb63-2bb4-4949-b567-ae619a7925af-kube-api-access-74chp\") pod \"nova-cell0-db-create-crwnd\" (UID: \"c5d4fb63-2bb4-4949-b567-ae619a7925af\") " pod="nova-kuttl-default/nova-cell0-db-create-crwnd" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.615185 4786 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-62wq6" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.691500 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw2fc\" (UniqueName: \"kubernetes.io/projected/4c686636-04e0-4ea9-beda-5dfd5c05b477-kube-api-access-fw2fc\") pod \"nova-cell1-db-create-f6c4f\" (UID: \"4c686636-04e0-4ea9-beda-5dfd5c05b477\") " pod="nova-kuttl-default/nova-cell1-db-create-f6c4f" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.691571 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4k2q\" (UniqueName: \"kubernetes.io/projected/f79b70f2-36e5-4532-bf69-a70a865afe9d-kube-api-access-l4k2q\") pod \"nova-api-30f1-account-create-update-lccvr\" (UID: \"f79b70f2-36e5-4532-bf69-a70a865afe9d\") " pod="nova-kuttl-default/nova-api-30f1-account-create-update-lccvr" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.691638 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c686636-04e0-4ea9-beda-5dfd5c05b477-operator-scripts\") pod \"nova-cell1-db-create-f6c4f\" (UID: \"4c686636-04e0-4ea9-beda-5dfd5c05b477\") " pod="nova-kuttl-default/nova-cell1-db-create-f6c4f" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.691695 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f79b70f2-36e5-4532-bf69-a70a865afe9d-operator-scripts\") pod \"nova-api-30f1-account-create-update-lccvr\" (UID: \"f79b70f2-36e5-4532-bf69-a70a865afe9d\") " pod="nova-kuttl-default/nova-api-30f1-account-create-update-lccvr" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.692528 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/f79b70f2-36e5-4532-bf69-a70a865afe9d-operator-scripts\") pod \"nova-api-30f1-account-create-update-lccvr\" (UID: \"f79b70f2-36e5-4532-bf69-a70a865afe9d\") " pod="nova-kuttl-default/nova-api-30f1-account-create-update-lccvr" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.700702 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-5daf-account-create-update-rl42c"] Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.701655 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-5daf-account-create-update-rl42c" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.711925 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-crwnd" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.712198 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell0-db-secret" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.725212 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4k2q\" (UniqueName: \"kubernetes.io/projected/f79b70f2-36e5-4532-bf69-a70a865afe9d-kube-api-access-l4k2q\") pod \"nova-api-30f1-account-create-update-lccvr\" (UID: \"f79b70f2-36e5-4532-bf69-a70a865afe9d\") " pod="nova-kuttl-default/nova-api-30f1-account-create-update-lccvr" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.733300 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-5daf-account-create-update-rl42c"] Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.793720 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfdrt\" (UniqueName: \"kubernetes.io/projected/ad4b32dd-6140-457a-88f3-b885d0a62c1e-kube-api-access-hfdrt\") pod \"nova-cell0-5daf-account-create-update-rl42c\" (UID: 
\"ad4b32dd-6140-457a-88f3-b885d0a62c1e\") " pod="nova-kuttl-default/nova-cell0-5daf-account-create-update-rl42c" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.793806 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad4b32dd-6140-457a-88f3-b885d0a62c1e-operator-scripts\") pod \"nova-cell0-5daf-account-create-update-rl42c\" (UID: \"ad4b32dd-6140-457a-88f3-b885d0a62c1e\") " pod="nova-kuttl-default/nova-cell0-5daf-account-create-update-rl42c" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.793839 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw2fc\" (UniqueName: \"kubernetes.io/projected/4c686636-04e0-4ea9-beda-5dfd5c05b477-kube-api-access-fw2fc\") pod \"nova-cell1-db-create-f6c4f\" (UID: \"4c686636-04e0-4ea9-beda-5dfd5c05b477\") " pod="nova-kuttl-default/nova-cell1-db-create-f6c4f" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.793904 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c686636-04e0-4ea9-beda-5dfd5c05b477-operator-scripts\") pod \"nova-cell1-db-create-f6c4f\" (UID: \"4c686636-04e0-4ea9-beda-5dfd5c05b477\") " pod="nova-kuttl-default/nova-cell1-db-create-f6c4f" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.794579 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c686636-04e0-4ea9-beda-5dfd5c05b477-operator-scripts\") pod \"nova-cell1-db-create-f6c4f\" (UID: \"4c686636-04e0-4ea9-beda-5dfd5c05b477\") " pod="nova-kuttl-default/nova-cell1-db-create-f6c4f" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.813534 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw2fc\" (UniqueName: 
\"kubernetes.io/projected/4c686636-04e0-4ea9-beda-5dfd5c05b477-kube-api-access-fw2fc\") pod \"nova-cell1-db-create-f6c4f\" (UID: \"4c686636-04e0-4ea9-beda-5dfd5c05b477\") " pod="nova-kuttl-default/nova-cell1-db-create-f6c4f" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.865106 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-30f1-account-create-update-lccvr" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.872918 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-f6c4f" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.895812 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfdrt\" (UniqueName: \"kubernetes.io/projected/ad4b32dd-6140-457a-88f3-b885d0a62c1e-kube-api-access-hfdrt\") pod \"nova-cell0-5daf-account-create-update-rl42c\" (UID: \"ad4b32dd-6140-457a-88f3-b885d0a62c1e\") " pod="nova-kuttl-default/nova-cell0-5daf-account-create-update-rl42c" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.895933 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad4b32dd-6140-457a-88f3-b885d0a62c1e-operator-scripts\") pod \"nova-cell0-5daf-account-create-update-rl42c\" (UID: \"ad4b32dd-6140-457a-88f3-b885d0a62c1e\") " pod="nova-kuttl-default/nova-cell0-5daf-account-create-update-rl42c" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.896747 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad4b32dd-6140-457a-88f3-b885d0a62c1e-operator-scripts\") pod \"nova-cell0-5daf-account-create-update-rl42c\" (UID: \"ad4b32dd-6140-457a-88f3-b885d0a62c1e\") " pod="nova-kuttl-default/nova-cell0-5daf-account-create-update-rl42c" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.913281 4786 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-1417-account-create-update-hqr9z"] Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.914503 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-1417-account-create-update-hqr9z" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.917199 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell1-db-secret" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.922085 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfdrt\" (UniqueName: \"kubernetes.io/projected/ad4b32dd-6140-457a-88f3-b885d0a62c1e-kube-api-access-hfdrt\") pod \"nova-cell0-5daf-account-create-update-rl42c\" (UID: \"ad4b32dd-6140-457a-88f3-b885d0a62c1e\") " pod="nova-kuttl-default/nova-cell0-5daf-account-create-update-rl42c" Jan 27 13:38:41 crc kubenswrapper[4786]: I0127 13:38:41.924544 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-1417-account-create-update-hqr9z"] Jan 27 13:38:42 crc kubenswrapper[4786]: I0127 13:38:42.019049 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph997\" (UniqueName: \"kubernetes.io/projected/f2303d07-2542-43fe-9b22-243c5daa607b-kube-api-access-ph997\") pod \"nova-cell1-1417-account-create-update-hqr9z\" (UID: \"f2303d07-2542-43fe-9b22-243c5daa607b\") " pod="nova-kuttl-default/nova-cell1-1417-account-create-update-hqr9z" Jan 27 13:38:42 crc kubenswrapper[4786]: I0127 13:38:42.019180 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2303d07-2542-43fe-9b22-243c5daa607b-operator-scripts\") pod \"nova-cell1-1417-account-create-update-hqr9z\" (UID: \"f2303d07-2542-43fe-9b22-243c5daa607b\") " 
pod="nova-kuttl-default/nova-cell1-1417-account-create-update-hqr9z" Jan 27 13:38:42 crc kubenswrapper[4786]: I0127 13:38:42.093795 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-5daf-account-create-update-rl42c" Jan 27 13:38:42 crc kubenswrapper[4786]: I0127 13:38:42.109157 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-62wq6"] Jan 27 13:38:42 crc kubenswrapper[4786]: W0127 13:38:42.111321 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49f94a84_88d4_4ac3_aa57_6c82d914b6e7.slice/crio-f01978611a02b6ce460012bbbffc2abaa06a397b296d230406424fa4f5264b1a WatchSource:0}: Error finding container f01978611a02b6ce460012bbbffc2abaa06a397b296d230406424fa4f5264b1a: Status 404 returned error can't find the container with id f01978611a02b6ce460012bbbffc2abaa06a397b296d230406424fa4f5264b1a Jan 27 13:38:42 crc kubenswrapper[4786]: I0127 13:38:42.120097 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ph997\" (UniqueName: \"kubernetes.io/projected/f2303d07-2542-43fe-9b22-243c5daa607b-kube-api-access-ph997\") pod \"nova-cell1-1417-account-create-update-hqr9z\" (UID: \"f2303d07-2542-43fe-9b22-243c5daa607b\") " pod="nova-kuttl-default/nova-cell1-1417-account-create-update-hqr9z" Jan 27 13:38:42 crc kubenswrapper[4786]: I0127 13:38:42.120431 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2303d07-2542-43fe-9b22-243c5daa607b-operator-scripts\") pod \"nova-cell1-1417-account-create-update-hqr9z\" (UID: \"f2303d07-2542-43fe-9b22-243c5daa607b\") " pod="nova-kuttl-default/nova-cell1-1417-account-create-update-hqr9z" Jan 27 13:38:42 crc kubenswrapper[4786]: I0127 13:38:42.121243 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2303d07-2542-43fe-9b22-243c5daa607b-operator-scripts\") pod \"nova-cell1-1417-account-create-update-hqr9z\" (UID: \"f2303d07-2542-43fe-9b22-243c5daa607b\") " pod="nova-kuttl-default/nova-cell1-1417-account-create-update-hqr9z" Jan 27 13:38:42 crc kubenswrapper[4786]: I0127 13:38:42.139137 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph997\" (UniqueName: \"kubernetes.io/projected/f2303d07-2542-43fe-9b22-243c5daa607b-kube-api-access-ph997\") pod \"nova-cell1-1417-account-create-update-hqr9z\" (UID: \"f2303d07-2542-43fe-9b22-243c5daa607b\") " pod="nova-kuttl-default/nova-cell1-1417-account-create-update-hqr9z" Jan 27 13:38:42 crc kubenswrapper[4786]: I0127 13:38:42.232242 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-1417-account-create-update-hqr9z" Jan 27 13:38:42 crc kubenswrapper[4786]: I0127 13:38:42.258808 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-crwnd"] Jan 27 13:38:42 crc kubenswrapper[4786]: W0127 13:38:42.273771 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5d4fb63_2bb4_4949_b567_ae619a7925af.slice/crio-1f528cd6a4f64df884a4a9bb0cdff2b48c4a782b6aef32ec2233b198135ea54d WatchSource:0}: Error finding container 1f528cd6a4f64df884a4a9bb0cdff2b48c4a782b6aef32ec2233b198135ea54d: Status 404 returned error can't find the container with id 1f528cd6a4f64df884a4a9bb0cdff2b48c4a782b6aef32ec2233b198135ea54d Jan 27 13:38:42 crc kubenswrapper[4786]: I0127 13:38:42.465138 4786 scope.go:117] "RemoveContainer" containerID="2f94fef961bba31453718ee505c65aba4d5e2d46c581dccdf2daaebbe3a39906" Jan 27 13:38:42 crc kubenswrapper[4786]: I0127 13:38:42.807652 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" event={"ID":"2c6a2646-52f7-41be-8a81-3fed6eac75cc","Type":"ContainerStarted","Data":"5a4c20f9aecd7c4c87050e3915ee6dd31d35d262bd6e9faa3426ca360fe926a7"} Jan 27 13:38:42 crc kubenswrapper[4786]: I0127 13:38:42.809421 4786 generic.go:334] "Generic (PLEG): container finished" podID="49f94a84-88d4-4ac3-aa57-6c82d914b6e7" containerID="f17dad222062c6569f0aa175518443806eeaf02aa92feb3087596a3540fb614d" exitCode=0 Jan 27 13:38:42 crc kubenswrapper[4786]: I0127 13:38:42.809502 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-62wq6" event={"ID":"49f94a84-88d4-4ac3-aa57-6c82d914b6e7","Type":"ContainerDied","Data":"f17dad222062c6569f0aa175518443806eeaf02aa92feb3087596a3540fb614d"} Jan 27 13:38:42 crc kubenswrapper[4786]: I0127 13:38:42.809536 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-62wq6" event={"ID":"49f94a84-88d4-4ac3-aa57-6c82d914b6e7","Type":"ContainerStarted","Data":"f01978611a02b6ce460012bbbffc2abaa06a397b296d230406424fa4f5264b1a"} Jan 27 13:38:42 crc kubenswrapper[4786]: I0127 13:38:42.811241 4786 generic.go:334] "Generic (PLEG): container finished" podID="c5d4fb63-2bb4-4949-b567-ae619a7925af" containerID="48030f2aedc0ff3b2765828ac7f449d0ddbfe897d6c3e980b2600baa24d51f54" exitCode=0 Jan 27 13:38:42 crc kubenswrapper[4786]: I0127 13:38:42.811283 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-crwnd" event={"ID":"c5d4fb63-2bb4-4949-b567-ae619a7925af","Type":"ContainerDied","Data":"48030f2aedc0ff3b2765828ac7f449d0ddbfe897d6c3e980b2600baa24d51f54"} Jan 27 13:38:42 crc kubenswrapper[4786]: I0127 13:38:42.811312 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-crwnd" 
event={"ID":"c5d4fb63-2bb4-4949-b567-ae619a7925af","Type":"ContainerStarted","Data":"1f528cd6a4f64df884a4a9bb0cdff2b48c4a782b6aef32ec2233b198135ea54d"} Jan 27 13:38:43 crc kubenswrapper[4786]: I0127 13:38:43.028939 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-5daf-account-create-update-rl42c"] Jan 27 13:38:43 crc kubenswrapper[4786]: I0127 13:38:43.037683 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-f6c4f"] Jan 27 13:38:43 crc kubenswrapper[4786]: I0127 13:38:43.044108 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-1417-account-create-update-hqr9z"] Jan 27 13:38:43 crc kubenswrapper[4786]: W0127 13:38:43.062734 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c686636_04e0_4ea9_beda_5dfd5c05b477.slice/crio-39973ee9fa34e6952eeb57ff931eb5576c0d278c8668f18d73ef9ea870320a13 WatchSource:0}: Error finding container 39973ee9fa34e6952eeb57ff931eb5576c0d278c8668f18d73ef9ea870320a13: Status 404 returned error can't find the container with id 39973ee9fa34e6952eeb57ff931eb5576c0d278c8668f18d73ef9ea870320a13 Jan 27 13:38:43 crc kubenswrapper[4786]: I0127 13:38:43.072615 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-30f1-account-create-update-lccvr"] Jan 27 13:38:43 crc kubenswrapper[4786]: I0127 13:38:43.820339 4786 generic.go:334] "Generic (PLEG): container finished" podID="f79b70f2-36e5-4532-bf69-a70a865afe9d" containerID="5ff06bfc40717dc3741a9a264d3fd7980cfdf0c5ac710a0c00909104d0f9cd87" exitCode=0 Jan 27 13:38:43 crc kubenswrapper[4786]: I0127 13:38:43.820413 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-30f1-account-create-update-lccvr" 
event={"ID":"f79b70f2-36e5-4532-bf69-a70a865afe9d","Type":"ContainerDied","Data":"5ff06bfc40717dc3741a9a264d3fd7980cfdf0c5ac710a0c00909104d0f9cd87"} Jan 27 13:38:43 crc kubenswrapper[4786]: I0127 13:38:43.820710 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-30f1-account-create-update-lccvr" event={"ID":"f79b70f2-36e5-4532-bf69-a70a865afe9d","Type":"ContainerStarted","Data":"a6b405101cc8c7fc3987427fc9807f2bba86a85b0b26769e45aa5029e73a2f8d"} Jan 27 13:38:43 crc kubenswrapper[4786]: I0127 13:38:43.822411 4786 generic.go:334] "Generic (PLEG): container finished" podID="4c686636-04e0-4ea9-beda-5dfd5c05b477" containerID="ccda72274c49a1399c44f1d2aa6d70b6e4300bee6a2ad1515ee71f8a06269452" exitCode=0 Jan 27 13:38:43 crc kubenswrapper[4786]: I0127 13:38:43.822455 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-f6c4f" event={"ID":"4c686636-04e0-4ea9-beda-5dfd5c05b477","Type":"ContainerDied","Data":"ccda72274c49a1399c44f1d2aa6d70b6e4300bee6a2ad1515ee71f8a06269452"} Jan 27 13:38:43 crc kubenswrapper[4786]: I0127 13:38:43.822497 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-f6c4f" event={"ID":"4c686636-04e0-4ea9-beda-5dfd5c05b477","Type":"ContainerStarted","Data":"39973ee9fa34e6952eeb57ff931eb5576c0d278c8668f18d73ef9ea870320a13"} Jan 27 13:38:43 crc kubenswrapper[4786]: I0127 13:38:43.825228 4786 generic.go:334] "Generic (PLEG): container finished" podID="f2303d07-2542-43fe-9b22-243c5daa607b" containerID="8c5267e49b9b768500c27856362f92a8565baa189b728b6e4b567294c1488f13" exitCode=0 Jan 27 13:38:43 crc kubenswrapper[4786]: I0127 13:38:43.825271 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-1417-account-create-update-hqr9z" event={"ID":"f2303d07-2542-43fe-9b22-243c5daa607b","Type":"ContainerDied","Data":"8c5267e49b9b768500c27856362f92a8565baa189b728b6e4b567294c1488f13"} Jan 27 13:38:43 crc 
kubenswrapper[4786]: I0127 13:38:43.825288 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-1417-account-create-update-hqr9z" event={"ID":"f2303d07-2542-43fe-9b22-243c5daa607b","Type":"ContainerStarted","Data":"e9845827e3e17970e5716438d68f5cb2f314412a3f6a5bc5363913052bb68f7c"} Jan 27 13:38:43 crc kubenswrapper[4786]: I0127 13:38:43.828159 4786 generic.go:334] "Generic (PLEG): container finished" podID="ad4b32dd-6140-457a-88f3-b885d0a62c1e" containerID="95118eaea21b23cbce991522180f5bd3db8aaf8d3acb9f85d684091a40d8bc0c" exitCode=0 Jan 27 13:38:43 crc kubenswrapper[4786]: I0127 13:38:43.828335 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-5daf-account-create-update-rl42c" event={"ID":"ad4b32dd-6140-457a-88f3-b885d0a62c1e","Type":"ContainerDied","Data":"95118eaea21b23cbce991522180f5bd3db8aaf8d3acb9f85d684091a40d8bc0c"} Jan 27 13:38:43 crc kubenswrapper[4786]: I0127 13:38:43.828359 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-5daf-account-create-update-rl42c" event={"ID":"ad4b32dd-6140-457a-88f3-b885d0a62c1e","Type":"ContainerStarted","Data":"796bddf9121482e29774d07d13a70d8d52c636556a276b557a7472555bb53263"} Jan 27 13:38:44 crc kubenswrapper[4786]: I0127 13:38:44.320585 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-62wq6" Jan 27 13:38:44 crc kubenswrapper[4786]: I0127 13:38:44.397769 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-crwnd" Jan 27 13:38:44 crc kubenswrapper[4786]: I0127 13:38:44.478748 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49f94a84-88d4-4ac3-aa57-6c82d914b6e7-operator-scripts\") pod \"49f94a84-88d4-4ac3-aa57-6c82d914b6e7\" (UID: \"49f94a84-88d4-4ac3-aa57-6c82d914b6e7\") " Jan 27 13:38:44 crc kubenswrapper[4786]: I0127 13:38:44.478862 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dd8k2\" (UniqueName: \"kubernetes.io/projected/49f94a84-88d4-4ac3-aa57-6c82d914b6e7-kube-api-access-dd8k2\") pod \"49f94a84-88d4-4ac3-aa57-6c82d914b6e7\" (UID: \"49f94a84-88d4-4ac3-aa57-6c82d914b6e7\") " Jan 27 13:38:44 crc kubenswrapper[4786]: I0127 13:38:44.479826 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49f94a84-88d4-4ac3-aa57-6c82d914b6e7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "49f94a84-88d4-4ac3-aa57-6c82d914b6e7" (UID: "49f94a84-88d4-4ac3-aa57-6c82d914b6e7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:38:44 crc kubenswrapper[4786]: I0127 13:38:44.486086 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49f94a84-88d4-4ac3-aa57-6c82d914b6e7-kube-api-access-dd8k2" (OuterVolumeSpecName: "kube-api-access-dd8k2") pod "49f94a84-88d4-4ac3-aa57-6c82d914b6e7" (UID: "49f94a84-88d4-4ac3-aa57-6c82d914b6e7"). InnerVolumeSpecName "kube-api-access-dd8k2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:38:44 crc kubenswrapper[4786]: I0127 13:38:44.580458 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5d4fb63-2bb4-4949-b567-ae619a7925af-operator-scripts\") pod \"c5d4fb63-2bb4-4949-b567-ae619a7925af\" (UID: \"c5d4fb63-2bb4-4949-b567-ae619a7925af\") " Jan 27 13:38:44 crc kubenswrapper[4786]: I0127 13:38:44.580684 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74chp\" (UniqueName: \"kubernetes.io/projected/c5d4fb63-2bb4-4949-b567-ae619a7925af-kube-api-access-74chp\") pod \"c5d4fb63-2bb4-4949-b567-ae619a7925af\" (UID: \"c5d4fb63-2bb4-4949-b567-ae619a7925af\") " Jan 27 13:38:44 crc kubenswrapper[4786]: I0127 13:38:44.581009 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5d4fb63-2bb4-4949-b567-ae619a7925af-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c5d4fb63-2bb4-4949-b567-ae619a7925af" (UID: "c5d4fb63-2bb4-4949-b567-ae619a7925af"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:38:44 crc kubenswrapper[4786]: I0127 13:38:44.581041 4786 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49f94a84-88d4-4ac3-aa57-6c82d914b6e7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:38:44 crc kubenswrapper[4786]: I0127 13:38:44.581097 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dd8k2\" (UniqueName: \"kubernetes.io/projected/49f94a84-88d4-4ac3-aa57-6c82d914b6e7-kube-api-access-dd8k2\") on node \"crc\" DevicePath \"\"" Jan 27 13:38:44 crc kubenswrapper[4786]: I0127 13:38:44.583948 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5d4fb63-2bb4-4949-b567-ae619a7925af-kube-api-access-74chp" (OuterVolumeSpecName: "kube-api-access-74chp") pod "c5d4fb63-2bb4-4949-b567-ae619a7925af" (UID: "c5d4fb63-2bb4-4949-b567-ae619a7925af"). InnerVolumeSpecName "kube-api-access-74chp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:38:44 crc kubenswrapper[4786]: I0127 13:38:44.682756 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74chp\" (UniqueName: \"kubernetes.io/projected/c5d4fb63-2bb4-4949-b567-ae619a7925af-kube-api-access-74chp\") on node \"crc\" DevicePath \"\"" Jan 27 13:38:44 crc kubenswrapper[4786]: I0127 13:38:44.682789 4786 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5d4fb63-2bb4-4949-b567-ae619a7925af-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:38:44 crc kubenswrapper[4786]: I0127 13:38:44.837072 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-62wq6" event={"ID":"49f94a84-88d4-4ac3-aa57-6c82d914b6e7","Type":"ContainerDied","Data":"f01978611a02b6ce460012bbbffc2abaa06a397b296d230406424fa4f5264b1a"} Jan 27 13:38:44 crc kubenswrapper[4786]: I0127 13:38:44.837102 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-62wq6" Jan 27 13:38:44 crc kubenswrapper[4786]: I0127 13:38:44.837111 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f01978611a02b6ce460012bbbffc2abaa06a397b296d230406424fa4f5264b1a" Jan 27 13:38:44 crc kubenswrapper[4786]: I0127 13:38:44.838636 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-crwnd" event={"ID":"c5d4fb63-2bb4-4949-b567-ae619a7925af","Type":"ContainerDied","Data":"1f528cd6a4f64df884a4a9bb0cdff2b48c4a782b6aef32ec2233b198135ea54d"} Jan 27 13:38:44 crc kubenswrapper[4786]: I0127 13:38:44.838654 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f528cd6a4f64df884a4a9bb0cdff2b48c4a782b6aef32ec2233b198135ea54d" Jan 27 13:38:44 crc kubenswrapper[4786]: I0127 13:38:44.838788 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-crwnd" Jan 27 13:38:45 crc kubenswrapper[4786]: E0127 13:38:45.100590 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7a8d504a24094c0a9dadddd97875ce7482cd0214e46e0fb4e03f736e3a9fea07" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 27 13:38:45 crc kubenswrapper[4786]: E0127 13:38:45.102816 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7a8d504a24094c0a9dadddd97875ce7482cd0214e46e0fb4e03f736e3a9fea07" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 27 13:38:45 crc kubenswrapper[4786]: E0127 13:38:45.104104 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7a8d504a24094c0a9dadddd97875ce7482cd0214e46e0fb4e03f736e3a9fea07" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 27 13:38:45 crc kubenswrapper[4786]: E0127 13:38:45.104201 4786 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="3cf31c53-34d5-4acc-b59e-096cfe798213" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 27 13:38:45 crc kubenswrapper[4786]: I0127 13:38:45.629669 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-5daf-account-create-update-rl42c" Jan 27 13:38:45 crc kubenswrapper[4786]: I0127 13:38:45.787750 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-30f1-account-create-update-lccvr" Jan 27 13:38:45 crc kubenswrapper[4786]: I0127 13:38:45.803933 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad4b32dd-6140-457a-88f3-b885d0a62c1e-operator-scripts\") pod \"ad4b32dd-6140-457a-88f3-b885d0a62c1e\" (UID: \"ad4b32dd-6140-457a-88f3-b885d0a62c1e\") " Jan 27 13:38:45 crc kubenswrapper[4786]: I0127 13:38:45.803984 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfdrt\" (UniqueName: \"kubernetes.io/projected/ad4b32dd-6140-457a-88f3-b885d0a62c1e-kube-api-access-hfdrt\") pod \"ad4b32dd-6140-457a-88f3-b885d0a62c1e\" (UID: \"ad4b32dd-6140-457a-88f3-b885d0a62c1e\") " Jan 27 13:38:45 crc kubenswrapper[4786]: I0127 13:38:45.804850 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad4b32dd-6140-457a-88f3-b885d0a62c1e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ad4b32dd-6140-457a-88f3-b885d0a62c1e" (UID: "ad4b32dd-6140-457a-88f3-b885d0a62c1e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:38:45 crc kubenswrapper[4786]: I0127 13:38:45.805261 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-1417-account-create-update-hqr9z" Jan 27 13:38:45 crc kubenswrapper[4786]: I0127 13:38:45.811153 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad4b32dd-6140-457a-88f3-b885d0a62c1e-kube-api-access-hfdrt" (OuterVolumeSpecName: "kube-api-access-hfdrt") pod "ad4b32dd-6140-457a-88f3-b885d0a62c1e" (UID: "ad4b32dd-6140-457a-88f3-b885d0a62c1e"). InnerVolumeSpecName "kube-api-access-hfdrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:38:45 crc kubenswrapper[4786]: I0127 13:38:45.813499 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-f6c4f" Jan 27 13:38:45 crc kubenswrapper[4786]: I0127 13:38:45.848756 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-5daf-account-create-update-rl42c" Jan 27 13:38:45 crc kubenswrapper[4786]: I0127 13:38:45.848762 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-5daf-account-create-update-rl42c" event={"ID":"ad4b32dd-6140-457a-88f3-b885d0a62c1e","Type":"ContainerDied","Data":"796bddf9121482e29774d07d13a70d8d52c636556a276b557a7472555bb53263"} Jan 27 13:38:45 crc kubenswrapper[4786]: I0127 13:38:45.848808 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="796bddf9121482e29774d07d13a70d8d52c636556a276b557a7472555bb53263" Jan 27 13:38:45 crc kubenswrapper[4786]: I0127 13:38:45.855852 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-30f1-account-create-update-lccvr" Jan 27 13:38:45 crc kubenswrapper[4786]: I0127 13:38:45.855888 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-30f1-account-create-update-lccvr" event={"ID":"f79b70f2-36e5-4532-bf69-a70a865afe9d","Type":"ContainerDied","Data":"a6b405101cc8c7fc3987427fc9807f2bba86a85b0b26769e45aa5029e73a2f8d"} Jan 27 13:38:45 crc kubenswrapper[4786]: I0127 13:38:45.855932 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6b405101cc8c7fc3987427fc9807f2bba86a85b0b26769e45aa5029e73a2f8d" Jan 27 13:38:45 crc kubenswrapper[4786]: I0127 13:38:45.857677 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-f6c4f" event={"ID":"4c686636-04e0-4ea9-beda-5dfd5c05b477","Type":"ContainerDied","Data":"39973ee9fa34e6952eeb57ff931eb5576c0d278c8668f18d73ef9ea870320a13"} Jan 27 13:38:45 crc kubenswrapper[4786]: I0127 13:38:45.857716 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39973ee9fa34e6952eeb57ff931eb5576c0d278c8668f18d73ef9ea870320a13" Jan 27 13:38:45 crc kubenswrapper[4786]: I0127 13:38:45.857725 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-f6c4f" Jan 27 13:38:45 crc kubenswrapper[4786]: I0127 13:38:45.859222 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-1417-account-create-update-hqr9z" event={"ID":"f2303d07-2542-43fe-9b22-243c5daa607b","Type":"ContainerDied","Data":"e9845827e3e17970e5716438d68f5cb2f314412a3f6a5bc5363913052bb68f7c"} Jan 27 13:38:45 crc kubenswrapper[4786]: I0127 13:38:45.859244 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9845827e3e17970e5716438d68f5cb2f314412a3f6a5bc5363913052bb68f7c" Jan 27 13:38:45 crc kubenswrapper[4786]: I0127 13:38:45.859248 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-1417-account-create-update-hqr9z" Jan 27 13:38:45 crc kubenswrapper[4786]: I0127 13:38:45.904733 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2303d07-2542-43fe-9b22-243c5daa607b-operator-scripts\") pod \"f2303d07-2542-43fe-9b22-243c5daa607b\" (UID: \"f2303d07-2542-43fe-9b22-243c5daa607b\") " Jan 27 13:38:45 crc kubenswrapper[4786]: I0127 13:38:45.904819 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f79b70f2-36e5-4532-bf69-a70a865afe9d-operator-scripts\") pod \"f79b70f2-36e5-4532-bf69-a70a865afe9d\" (UID: \"f79b70f2-36e5-4532-bf69-a70a865afe9d\") " Jan 27 13:38:45 crc kubenswrapper[4786]: I0127 13:38:45.904856 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ph997\" (UniqueName: \"kubernetes.io/projected/f2303d07-2542-43fe-9b22-243c5daa607b-kube-api-access-ph997\") pod \"f2303d07-2542-43fe-9b22-243c5daa607b\" (UID: \"f2303d07-2542-43fe-9b22-243c5daa607b\") " Jan 27 13:38:45 crc kubenswrapper[4786]: I0127 13:38:45.904944 4786 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4k2q\" (UniqueName: \"kubernetes.io/projected/f79b70f2-36e5-4532-bf69-a70a865afe9d-kube-api-access-l4k2q\") pod \"f79b70f2-36e5-4532-bf69-a70a865afe9d\" (UID: \"f79b70f2-36e5-4532-bf69-a70a865afe9d\") " Jan 27 13:38:45 crc kubenswrapper[4786]: I0127 13:38:45.904982 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fw2fc\" (UniqueName: \"kubernetes.io/projected/4c686636-04e0-4ea9-beda-5dfd5c05b477-kube-api-access-fw2fc\") pod \"4c686636-04e0-4ea9-beda-5dfd5c05b477\" (UID: \"4c686636-04e0-4ea9-beda-5dfd5c05b477\") " Jan 27 13:38:45 crc kubenswrapper[4786]: I0127 13:38:45.905037 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c686636-04e0-4ea9-beda-5dfd5c05b477-operator-scripts\") pod \"4c686636-04e0-4ea9-beda-5dfd5c05b477\" (UID: \"4c686636-04e0-4ea9-beda-5dfd5c05b477\") " Jan 27 13:38:45 crc kubenswrapper[4786]: I0127 13:38:45.905278 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2303d07-2542-43fe-9b22-243c5daa607b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f2303d07-2542-43fe-9b22-243c5daa607b" (UID: "f2303d07-2542-43fe-9b22-243c5daa607b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:38:45 crc kubenswrapper[4786]: I0127 13:38:45.905339 4786 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad4b32dd-6140-457a-88f3-b885d0a62c1e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:38:45 crc kubenswrapper[4786]: I0127 13:38:45.905353 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfdrt\" (UniqueName: \"kubernetes.io/projected/ad4b32dd-6140-457a-88f3-b885d0a62c1e-kube-api-access-hfdrt\") on node \"crc\" DevicePath \"\"" Jan 27 13:38:45 crc kubenswrapper[4786]: I0127 13:38:45.905384 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f79b70f2-36e5-4532-bf69-a70a865afe9d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f79b70f2-36e5-4532-bf69-a70a865afe9d" (UID: "f79b70f2-36e5-4532-bf69-a70a865afe9d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:38:45 crc kubenswrapper[4786]: I0127 13:38:45.905753 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c686636-04e0-4ea9-beda-5dfd5c05b477-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4c686636-04e0-4ea9-beda-5dfd5c05b477" (UID: "4c686636-04e0-4ea9-beda-5dfd5c05b477"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:38:45 crc kubenswrapper[4786]: I0127 13:38:45.907964 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2303d07-2542-43fe-9b22-243c5daa607b-kube-api-access-ph997" (OuterVolumeSpecName: "kube-api-access-ph997") pod "f2303d07-2542-43fe-9b22-243c5daa607b" (UID: "f2303d07-2542-43fe-9b22-243c5daa607b"). InnerVolumeSpecName "kube-api-access-ph997". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:38:45 crc kubenswrapper[4786]: I0127 13:38:45.908009 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c686636-04e0-4ea9-beda-5dfd5c05b477-kube-api-access-fw2fc" (OuterVolumeSpecName: "kube-api-access-fw2fc") pod "4c686636-04e0-4ea9-beda-5dfd5c05b477" (UID: "4c686636-04e0-4ea9-beda-5dfd5c05b477"). InnerVolumeSpecName "kube-api-access-fw2fc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:38:45 crc kubenswrapper[4786]: I0127 13:38:45.908006 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f79b70f2-36e5-4532-bf69-a70a865afe9d-kube-api-access-l4k2q" (OuterVolumeSpecName: "kube-api-access-l4k2q") pod "f79b70f2-36e5-4532-bf69-a70a865afe9d" (UID: "f79b70f2-36e5-4532-bf69-a70a865afe9d"). InnerVolumeSpecName "kube-api-access-l4k2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:38:46 crc kubenswrapper[4786]: I0127 13:38:46.006429 4786 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2303d07-2542-43fe-9b22-243c5daa607b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:38:46 crc kubenswrapper[4786]: I0127 13:38:46.006459 4786 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f79b70f2-36e5-4532-bf69-a70a865afe9d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:38:46 crc kubenswrapper[4786]: I0127 13:38:46.006468 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ph997\" (UniqueName: \"kubernetes.io/projected/f2303d07-2542-43fe-9b22-243c5daa607b-kube-api-access-ph997\") on node \"crc\" DevicePath \"\"" Jan 27 13:38:46 crc kubenswrapper[4786]: I0127 13:38:46.006480 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4k2q\" (UniqueName: 
\"kubernetes.io/projected/f79b70f2-36e5-4532-bf69-a70a865afe9d-kube-api-access-l4k2q\") on node \"crc\" DevicePath \"\"" Jan 27 13:38:46 crc kubenswrapper[4786]: I0127 13:38:46.006490 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fw2fc\" (UniqueName: \"kubernetes.io/projected/4c686636-04e0-4ea9-beda-5dfd5c05b477-kube-api-access-fw2fc\") on node \"crc\" DevicePath \"\"" Jan 27 13:38:46 crc kubenswrapper[4786]: I0127 13:38:46.006498 4786 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c686636-04e0-4ea9-beda-5dfd5c05b477-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:38:47 crc kubenswrapper[4786]: I0127 13:38:47.018045 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lm94s"] Jan 27 13:38:47 crc kubenswrapper[4786]: E0127 13:38:47.018790 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2303d07-2542-43fe-9b22-243c5daa607b" containerName="mariadb-account-create-update" Jan 27 13:38:47 crc kubenswrapper[4786]: I0127 13:38:47.018807 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2303d07-2542-43fe-9b22-243c5daa607b" containerName="mariadb-account-create-update" Jan 27 13:38:47 crc kubenswrapper[4786]: E0127 13:38:47.018830 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5d4fb63-2bb4-4949-b567-ae619a7925af" containerName="mariadb-database-create" Jan 27 13:38:47 crc kubenswrapper[4786]: I0127 13:38:47.018837 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5d4fb63-2bb4-4949-b567-ae619a7925af" containerName="mariadb-database-create" Jan 27 13:38:47 crc kubenswrapper[4786]: E0127 13:38:47.018852 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f79b70f2-36e5-4532-bf69-a70a865afe9d" containerName="mariadb-account-create-update" Jan 27 13:38:47 crc kubenswrapper[4786]: I0127 13:38:47.018860 4786 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="f79b70f2-36e5-4532-bf69-a70a865afe9d" containerName="mariadb-account-create-update" Jan 27 13:38:47 crc kubenswrapper[4786]: E0127 13:38:47.018873 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad4b32dd-6140-457a-88f3-b885d0a62c1e" containerName="mariadb-account-create-update" Jan 27 13:38:47 crc kubenswrapper[4786]: I0127 13:38:47.018882 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad4b32dd-6140-457a-88f3-b885d0a62c1e" containerName="mariadb-account-create-update" Jan 27 13:38:47 crc kubenswrapper[4786]: E0127 13:38:47.018904 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49f94a84-88d4-4ac3-aa57-6c82d914b6e7" containerName="mariadb-database-create" Jan 27 13:38:47 crc kubenswrapper[4786]: I0127 13:38:47.018913 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="49f94a84-88d4-4ac3-aa57-6c82d914b6e7" containerName="mariadb-database-create" Jan 27 13:38:47 crc kubenswrapper[4786]: E0127 13:38:47.018936 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c686636-04e0-4ea9-beda-5dfd5c05b477" containerName="mariadb-database-create" Jan 27 13:38:47 crc kubenswrapper[4786]: I0127 13:38:47.018943 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c686636-04e0-4ea9-beda-5dfd5c05b477" containerName="mariadb-database-create" Jan 27 13:38:47 crc kubenswrapper[4786]: I0127 13:38:47.019152 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="f79b70f2-36e5-4532-bf69-a70a865afe9d" containerName="mariadb-account-create-update" Jan 27 13:38:47 crc kubenswrapper[4786]: I0127 13:38:47.019169 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c686636-04e0-4ea9-beda-5dfd5c05b477" containerName="mariadb-database-create" Jan 27 13:38:47 crc kubenswrapper[4786]: I0127 13:38:47.019185 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5d4fb63-2bb4-4949-b567-ae619a7925af" 
containerName="mariadb-database-create" Jan 27 13:38:47 crc kubenswrapper[4786]: I0127 13:38:47.019200 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad4b32dd-6140-457a-88f3-b885d0a62c1e" containerName="mariadb-account-create-update" Jan 27 13:38:47 crc kubenswrapper[4786]: I0127 13:38:47.019224 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2303d07-2542-43fe-9b22-243c5daa607b" containerName="mariadb-account-create-update" Jan 27 13:38:47 crc kubenswrapper[4786]: I0127 13:38:47.019234 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="49f94a84-88d4-4ac3-aa57-6c82d914b6e7" containerName="mariadb-database-create" Jan 27 13:38:47 crc kubenswrapper[4786]: I0127 13:38:47.019943 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lm94s" Jan 27 13:38:47 crc kubenswrapper[4786]: I0127 13:38:47.024151 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-scripts" Jan 27 13:38:47 crc kubenswrapper[4786]: I0127 13:38:47.025943 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 27 13:38:47 crc kubenswrapper[4786]: I0127 13:38:47.026419 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-nova-kuttl-dockercfg-twlpf" Jan 27 13:38:47 crc kubenswrapper[4786]: I0127 13:38:47.030682 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lm94s"] Jan 27 13:38:47 crc kubenswrapper[4786]: I0127 13:38:47.121024 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b52037f1-e48f-46e3-a5bc-a6ffa2fc7541-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-lm94s\" (UID: \"b52037f1-e48f-46e3-a5bc-a6ffa2fc7541\") " 
pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lm94s" Jan 27 13:38:47 crc kubenswrapper[4786]: I0127 13:38:47.121124 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b52037f1-e48f-46e3-a5bc-a6ffa2fc7541-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-lm94s\" (UID: \"b52037f1-e48f-46e3-a5bc-a6ffa2fc7541\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lm94s" Jan 27 13:38:47 crc kubenswrapper[4786]: I0127 13:38:47.121159 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjbl8\" (UniqueName: \"kubernetes.io/projected/b52037f1-e48f-46e3-a5bc-a6ffa2fc7541-kube-api-access-jjbl8\") pod \"nova-kuttl-cell0-conductor-db-sync-lm94s\" (UID: \"b52037f1-e48f-46e3-a5bc-a6ffa2fc7541\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lm94s" Jan 27 13:38:47 crc kubenswrapper[4786]: I0127 13:38:47.222270 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b52037f1-e48f-46e3-a5bc-a6ffa2fc7541-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-lm94s\" (UID: \"b52037f1-e48f-46e3-a5bc-a6ffa2fc7541\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lm94s" Jan 27 13:38:47 crc kubenswrapper[4786]: I0127 13:38:47.222375 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b52037f1-e48f-46e3-a5bc-a6ffa2fc7541-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-lm94s\" (UID: \"b52037f1-e48f-46e3-a5bc-a6ffa2fc7541\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lm94s" Jan 27 13:38:47 crc kubenswrapper[4786]: I0127 13:38:47.222403 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjbl8\" (UniqueName: 
\"kubernetes.io/projected/b52037f1-e48f-46e3-a5bc-a6ffa2fc7541-kube-api-access-jjbl8\") pod \"nova-kuttl-cell0-conductor-db-sync-lm94s\" (UID: \"b52037f1-e48f-46e3-a5bc-a6ffa2fc7541\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lm94s" Jan 27 13:38:47 crc kubenswrapper[4786]: I0127 13:38:47.226322 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b52037f1-e48f-46e3-a5bc-a6ffa2fc7541-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-lm94s\" (UID: \"b52037f1-e48f-46e3-a5bc-a6ffa2fc7541\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lm94s" Jan 27 13:38:47 crc kubenswrapper[4786]: I0127 13:38:47.228272 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b52037f1-e48f-46e3-a5bc-a6ffa2fc7541-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-lm94s\" (UID: \"b52037f1-e48f-46e3-a5bc-a6ffa2fc7541\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lm94s" Jan 27 13:38:47 crc kubenswrapper[4786]: I0127 13:38:47.243313 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjbl8\" (UniqueName: \"kubernetes.io/projected/b52037f1-e48f-46e3-a5bc-a6ffa2fc7541-kube-api-access-jjbl8\") pod \"nova-kuttl-cell0-conductor-db-sync-lm94s\" (UID: \"b52037f1-e48f-46e3-a5bc-a6ffa2fc7541\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lm94s" Jan 27 13:38:47 crc kubenswrapper[4786]: I0127 13:38:47.337368 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lm94s" Jan 27 13:38:47 crc kubenswrapper[4786]: I0127 13:38:47.794370 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lm94s"] Jan 27 13:38:47 crc kubenswrapper[4786]: W0127 13:38:47.798693 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb52037f1_e48f_46e3_a5bc_a6ffa2fc7541.slice/crio-470cf011ad5c4f3771ee852d249de320c0ecca40b04bdc793b03616be092c3db WatchSource:0}: Error finding container 470cf011ad5c4f3771ee852d249de320c0ecca40b04bdc793b03616be092c3db: Status 404 returned error can't find the container with id 470cf011ad5c4f3771ee852d249de320c0ecca40b04bdc793b03616be092c3db Jan 27 13:38:47 crc kubenswrapper[4786]: I0127 13:38:47.881434 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lm94s" event={"ID":"b52037f1-e48f-46e3-a5bc-a6ffa2fc7541","Type":"ContainerStarted","Data":"470cf011ad5c4f3771ee852d249de320c0ecca40b04bdc793b03616be092c3db"} Jan 27 13:38:48 crc kubenswrapper[4786]: I0127 13:38:48.891893 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lm94s" event={"ID":"b52037f1-e48f-46e3-a5bc-a6ffa2fc7541","Type":"ContainerStarted","Data":"4abaa6688014c3919443064c4f77ae8bda6701f8a84ab8cf7e192e3959245d65"} Jan 27 13:38:48 crc kubenswrapper[4786]: I0127 13:38:48.907557 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lm94s" podStartSLOduration=1.9075390209999998 podStartE2EDuration="1.907539021s" podCreationTimestamp="2026-01-27 13:38:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:38:48.904744025 +0000 UTC 
m=+1912.115358164" watchObservedRunningTime="2026-01-27 13:38:48.907539021 +0000 UTC m=+1912.118153150" Jan 27 13:38:49 crc kubenswrapper[4786]: E0127 13:38:49.456417 4786 secret.go:188] Couldn't get secret nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-config-data: secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 27 13:38:49 crc kubenswrapper[4786]: E0127 13:38:49.456512 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3cf31c53-34d5-4acc-b59e-096cfe798213-config-data podName:3cf31c53-34d5-4acc-b59e-096cfe798213 nodeName:}" failed. No retries permitted until 2026-01-27 13:39:05.456485161 +0000 UTC m=+1928.667099280 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/3cf31c53-34d5-4acc-b59e-096cfe798213-config-data") pod "nova-kuttl-cell1-compute-fake1-compute-0" (UID: "3cf31c53-34d5-4acc-b59e-096cfe798213") : secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 27 13:38:50 crc kubenswrapper[4786]: E0127 13:38:50.099733 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7a8d504a24094c0a9dadddd97875ce7482cd0214e46e0fb4e03f736e3a9fea07" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 27 13:38:50 crc kubenswrapper[4786]: E0127 13:38:50.103609 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7a8d504a24094c0a9dadddd97875ce7482cd0214e46e0fb4e03f736e3a9fea07" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 27 13:38:50 crc kubenswrapper[4786]: E0127 13:38:50.105092 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot 
register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7a8d504a24094c0a9dadddd97875ce7482cd0214e46e0fb4e03f736e3a9fea07" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 27 13:38:50 crc kubenswrapper[4786]: E0127 13:38:50.105138 4786 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="3cf31c53-34d5-4acc-b59e-096cfe798213" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 27 13:38:52 crc kubenswrapper[4786]: I0127 13:38:52.922090 4786 generic.go:334] "Generic (PLEG): container finished" podID="b52037f1-e48f-46e3-a5bc-a6ffa2fc7541" containerID="4abaa6688014c3919443064c4f77ae8bda6701f8a84ab8cf7e192e3959245d65" exitCode=0 Jan 27 13:38:52 crc kubenswrapper[4786]: I0127 13:38:52.922207 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lm94s" event={"ID":"b52037f1-e48f-46e3-a5bc-a6ffa2fc7541","Type":"ContainerDied","Data":"4abaa6688014c3919443064c4f77ae8bda6701f8a84ab8cf7e192e3959245d65"} Jan 27 13:38:54 crc kubenswrapper[4786]: I0127 13:38:54.229853 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lm94s"
Jan 27 13:38:54 crc kubenswrapper[4786]: I0127 13:38:54.327216 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b52037f1-e48f-46e3-a5bc-a6ffa2fc7541-config-data\") pod \"b52037f1-e48f-46e3-a5bc-a6ffa2fc7541\" (UID: \"b52037f1-e48f-46e3-a5bc-a6ffa2fc7541\") "
Jan 27 13:38:54 crc kubenswrapper[4786]: I0127 13:38:54.327372 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjbl8\" (UniqueName: \"kubernetes.io/projected/b52037f1-e48f-46e3-a5bc-a6ffa2fc7541-kube-api-access-jjbl8\") pod \"b52037f1-e48f-46e3-a5bc-a6ffa2fc7541\" (UID: \"b52037f1-e48f-46e3-a5bc-a6ffa2fc7541\") "
Jan 27 13:38:54 crc kubenswrapper[4786]: I0127 13:38:54.327471 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b52037f1-e48f-46e3-a5bc-a6ffa2fc7541-scripts\") pod \"b52037f1-e48f-46e3-a5bc-a6ffa2fc7541\" (UID: \"b52037f1-e48f-46e3-a5bc-a6ffa2fc7541\") "
Jan 27 13:38:54 crc kubenswrapper[4786]: I0127 13:38:54.332048 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b52037f1-e48f-46e3-a5bc-a6ffa2fc7541-scripts" (OuterVolumeSpecName: "scripts") pod "b52037f1-e48f-46e3-a5bc-a6ffa2fc7541" (UID: "b52037f1-e48f-46e3-a5bc-a6ffa2fc7541"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 13:38:54 crc kubenswrapper[4786]: I0127 13:38:54.332160 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b52037f1-e48f-46e3-a5bc-a6ffa2fc7541-kube-api-access-jjbl8" (OuterVolumeSpecName: "kube-api-access-jjbl8") pod "b52037f1-e48f-46e3-a5bc-a6ffa2fc7541" (UID: "b52037f1-e48f-46e3-a5bc-a6ffa2fc7541"). InnerVolumeSpecName "kube-api-access-jjbl8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:38:54 crc kubenswrapper[4786]: I0127 13:38:54.354518 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b52037f1-e48f-46e3-a5bc-a6ffa2fc7541-config-data" (OuterVolumeSpecName: "config-data") pod "b52037f1-e48f-46e3-a5bc-a6ffa2fc7541" (UID: "b52037f1-e48f-46e3-a5bc-a6ffa2fc7541"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 13:38:54 crc kubenswrapper[4786]: I0127 13:38:54.429477 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjbl8\" (UniqueName: \"kubernetes.io/projected/b52037f1-e48f-46e3-a5bc-a6ffa2fc7541-kube-api-access-jjbl8\") on node \"crc\" DevicePath \"\""
Jan 27 13:38:54 crc kubenswrapper[4786]: I0127 13:38:54.429513 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b52037f1-e48f-46e3-a5bc-a6ffa2fc7541-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 13:38:54 crc kubenswrapper[4786]: I0127 13:38:54.429526 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b52037f1-e48f-46e3-a5bc-a6ffa2fc7541-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 13:38:54 crc kubenswrapper[4786]: I0127 13:38:54.938387 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lm94s" event={"ID":"b52037f1-e48f-46e3-a5bc-a6ffa2fc7541","Type":"ContainerDied","Data":"470cf011ad5c4f3771ee852d249de320c0ecca40b04bdc793b03616be092c3db"}
Jan 27 13:38:54 crc kubenswrapper[4786]: I0127 13:38:54.938437 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="470cf011ad5c4f3771ee852d249de320c0ecca40b04bdc793b03616be092c3db"
Jan 27 13:38:54 crc kubenswrapper[4786]: I0127 13:38:54.938447 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lm94s"
Jan 27 13:38:55 crc kubenswrapper[4786]: I0127 13:38:55.051306 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"]
Jan 27 13:38:55 crc kubenswrapper[4786]: E0127 13:38:55.051652 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b52037f1-e48f-46e3-a5bc-a6ffa2fc7541" containerName="nova-kuttl-cell0-conductor-db-sync"
Jan 27 13:38:55 crc kubenswrapper[4786]: I0127 13:38:55.051669 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="b52037f1-e48f-46e3-a5bc-a6ffa2fc7541" containerName="nova-kuttl-cell0-conductor-db-sync"
Jan 27 13:38:55 crc kubenswrapper[4786]: I0127 13:38:55.051848 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="b52037f1-e48f-46e3-a5bc-a6ffa2fc7541" containerName="nova-kuttl-cell0-conductor-db-sync"
Jan 27 13:38:55 crc kubenswrapper[4786]: I0127 13:38:55.052394 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 27 13:38:55 crc kubenswrapper[4786]: I0127 13:38:55.056559 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data"
Jan 27 13:38:55 crc kubenswrapper[4786]: I0127 13:38:55.056765 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-nova-kuttl-dockercfg-twlpf"
Jan 27 13:38:55 crc kubenswrapper[4786]: I0127 13:38:55.068973 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"]
Jan 27 13:38:55 crc kubenswrapper[4786]: E0127 13:38:55.103173 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7a8d504a24094c0a9dadddd97875ce7482cd0214e46e0fb4e03f736e3a9fea07" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 27 13:38:55 crc kubenswrapper[4786]: E0127 13:38:55.112018 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7a8d504a24094c0a9dadddd97875ce7482cd0214e46e0fb4e03f736e3a9fea07" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 27 13:38:55 crc kubenswrapper[4786]: E0127 13:38:55.121415 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7a8d504a24094c0a9dadddd97875ce7482cd0214e46e0fb4e03f736e3a9fea07" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 27 13:38:55 crc kubenswrapper[4786]: E0127 13:38:55.121516 4786 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="3cf31c53-34d5-4acc-b59e-096cfe798213" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 27 13:38:55 crc kubenswrapper[4786]: I0127 13:38:55.141116 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x77j\" (UniqueName: \"kubernetes.io/projected/10c3eb0f-3265-4520-afd7-0e002bcc5b81-kube-api-access-6x77j\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"10c3eb0f-3265-4520-afd7-0e002bcc5b81\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 27 13:38:55 crc kubenswrapper[4786]: I0127 13:38:55.141215 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10c3eb0f-3265-4520-afd7-0e002bcc5b81-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"10c3eb0f-3265-4520-afd7-0e002bcc5b81\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 27 13:38:55 crc kubenswrapper[4786]: I0127 13:38:55.242751 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x77j\" (UniqueName: \"kubernetes.io/projected/10c3eb0f-3265-4520-afd7-0e002bcc5b81-kube-api-access-6x77j\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"10c3eb0f-3265-4520-afd7-0e002bcc5b81\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 27 13:38:55 crc kubenswrapper[4786]: I0127 13:38:55.242853 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10c3eb0f-3265-4520-afd7-0e002bcc5b81-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"10c3eb0f-3265-4520-afd7-0e002bcc5b81\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 27 13:38:55 crc kubenswrapper[4786]: I0127 13:38:55.247404 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10c3eb0f-3265-4520-afd7-0e002bcc5b81-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"10c3eb0f-3265-4520-afd7-0e002bcc5b81\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 27 13:38:55 crc kubenswrapper[4786]: I0127 13:38:55.261834 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x77j\" (UniqueName: \"kubernetes.io/projected/10c3eb0f-3265-4520-afd7-0e002bcc5b81-kube-api-access-6x77j\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"10c3eb0f-3265-4520-afd7-0e002bcc5b81\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 27 13:38:55 crc kubenswrapper[4786]: I0127 13:38:55.366535 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 27 13:38:55 crc kubenswrapper[4786]: I0127 13:38:55.846628 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"]
Jan 27 13:38:55 crc kubenswrapper[4786]: I0127 13:38:55.948632 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"10c3eb0f-3265-4520-afd7-0e002bcc5b81","Type":"ContainerStarted","Data":"9dbaf5ec390c2d4fd2d2fcddedc5526d22af598e7539eaa4fef9c022b0f50ff4"}
Jan 27 13:38:56 crc kubenswrapper[4786]: I0127 13:38:56.957107 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"10c3eb0f-3265-4520-afd7-0e002bcc5b81","Type":"ContainerStarted","Data":"e6b72c9836fde863b355bc9c649a0e3786912ef11f75bb65ee892bd57c0834d8"}
Jan 27 13:38:56 crc kubenswrapper[4786]: I0127 13:38:56.958322 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 27 13:38:56 crc kubenswrapper[4786]: I0127 13:38:56.980769 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podStartSLOduration=1.9807470820000002 podStartE2EDuration="1.980747082s" podCreationTimestamp="2026-01-27 13:38:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:38:56.971686784 +0000 UTC m=+1920.182300913" watchObservedRunningTime="2026-01-27 13:38:56.980747082 +0000 UTC m=+1920.191361221"
Jan 27 13:39:00 crc kubenswrapper[4786]: E0127 13:39:00.098811 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7a8d504a24094c0a9dadddd97875ce7482cd0214e46e0fb4e03f736e3a9fea07" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 27 13:39:00 crc kubenswrapper[4786]: E0127 13:39:00.101104 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7a8d504a24094c0a9dadddd97875ce7482cd0214e46e0fb4e03f736e3a9fea07" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 27 13:39:00 crc kubenswrapper[4786]: E0127 13:39:00.102627 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7a8d504a24094c0a9dadddd97875ce7482cd0214e46e0fb4e03f736e3a9fea07" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 27 13:39:00 crc kubenswrapper[4786]: E0127 13:39:00.102658 4786 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="3cf31c53-34d5-4acc-b59e-096cfe798213" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 27 13:39:05 crc kubenswrapper[4786]: E0127 13:39:05.098502 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7a8d504a24094c0a9dadddd97875ce7482cd0214e46e0fb4e03f736e3a9fea07" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 27 13:39:05 crc kubenswrapper[4786]: E0127 13:39:05.103379 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7a8d504a24094c0a9dadddd97875ce7482cd0214e46e0fb4e03f736e3a9fea07" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 27 13:39:05 crc kubenswrapper[4786]: E0127 13:39:05.104833 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7a8d504a24094c0a9dadddd97875ce7482cd0214e46e0fb4e03f736e3a9fea07" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 27 13:39:05 crc kubenswrapper[4786]: E0127 13:39:05.104916 4786 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="3cf31c53-34d5-4acc-b59e-096cfe798213" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 27 13:39:05 crc kubenswrapper[4786]: I0127 13:39:05.392129 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 27 13:39:05 crc kubenswrapper[4786]: E0127 13:39:05.521687 4786 secret.go:188] Couldn't get secret nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-config-data: secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found
Jan 27 13:39:05 crc kubenswrapper[4786]: E0127 13:39:05.521770 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3cf31c53-34d5-4acc-b59e-096cfe798213-config-data podName:3cf31c53-34d5-4acc-b59e-096cfe798213 nodeName:}" failed. No retries permitted until 2026-01-27 13:39:37.521752392 +0000 UTC m=+1960.732366511 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/3cf31c53-34d5-4acc-b59e-096cfe798213-config-data") pod "nova-kuttl-cell1-compute-fake1-compute-0" (UID: "3cf31c53-34d5-4acc-b59e-096cfe798213") : secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found
Jan 27 13:39:05 crc kubenswrapper[4786]: I0127 13:39:05.860738 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-j2ns7"]
Jan 27 13:39:05 crc kubenswrapper[4786]: I0127 13:39:05.861897 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-j2ns7"
Jan 27 13:39:05 crc kubenswrapper[4786]: I0127 13:39:05.865060 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-config-data"
Jan 27 13:39:05 crc kubenswrapper[4786]: I0127 13:39:05.865136 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-scripts"
Jan 27 13:39:05 crc kubenswrapper[4786]: I0127 13:39:05.887706 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-j2ns7"]
Jan 27 13:39:05 crc kubenswrapper[4786]: I0127 13:39:05.931007 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd6wl\" (UniqueName: \"kubernetes.io/projected/e4df6489-80cb-45c8-90b2-7fd2e9bca103-kube-api-access-wd6wl\") pod \"nova-kuttl-cell0-cell-mapping-j2ns7\" (UID: \"e4df6489-80cb-45c8-90b2-7fd2e9bca103\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-j2ns7"
Jan 27 13:39:05 crc kubenswrapper[4786]: I0127 13:39:05.931433 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4df6489-80cb-45c8-90b2-7fd2e9bca103-config-data\") pod \"nova-kuttl-cell0-cell-mapping-j2ns7\" (UID: \"e4df6489-80cb-45c8-90b2-7fd2e9bca103\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-j2ns7"
Jan 27 13:39:05 crc kubenswrapper[4786]: I0127 13:39:05.931527 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4df6489-80cb-45c8-90b2-7fd2e9bca103-scripts\") pod \"nova-kuttl-cell0-cell-mapping-j2ns7\" (UID: \"e4df6489-80cb-45c8-90b2-7fd2e9bca103\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-j2ns7"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.032728 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd6wl\" (UniqueName: \"kubernetes.io/projected/e4df6489-80cb-45c8-90b2-7fd2e9bca103-kube-api-access-wd6wl\") pod \"nova-kuttl-cell0-cell-mapping-j2ns7\" (UID: \"e4df6489-80cb-45c8-90b2-7fd2e9bca103\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-j2ns7"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.032862 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4df6489-80cb-45c8-90b2-7fd2e9bca103-config-data\") pod \"nova-kuttl-cell0-cell-mapping-j2ns7\" (UID: \"e4df6489-80cb-45c8-90b2-7fd2e9bca103\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-j2ns7"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.032885 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4df6489-80cb-45c8-90b2-7fd2e9bca103-scripts\") pod \"nova-kuttl-cell0-cell-mapping-j2ns7\" (UID: \"e4df6489-80cb-45c8-90b2-7fd2e9bca103\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-j2ns7"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.050996 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4df6489-80cb-45c8-90b2-7fd2e9bca103-scripts\") pod \"nova-kuttl-cell0-cell-mapping-j2ns7\" (UID: \"e4df6489-80cb-45c8-90b2-7fd2e9bca103\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-j2ns7"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.051161 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4df6489-80cb-45c8-90b2-7fd2e9bca103-config-data\") pod \"nova-kuttl-cell0-cell-mapping-j2ns7\" (UID: \"e4df6489-80cb-45c8-90b2-7fd2e9bca103\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-j2ns7"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.052960 4786 generic.go:334] "Generic (PLEG): container finished" podID="3cf31c53-34d5-4acc-b59e-096cfe798213" containerID="7a8d504a24094c0a9dadddd97875ce7482cd0214e46e0fb4e03f736e3a9fea07" exitCode=137
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.053013 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"3cf31c53-34d5-4acc-b59e-096cfe798213","Type":"ContainerDied","Data":"7a8d504a24094c0a9dadddd97875ce7482cd0214e46e0fb4e03f736e3a9fea07"}
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.053050 4786 scope.go:117] "RemoveContainer" containerID="f4981bf8a8c2a954fa9a41a005d9f10fea717f207ce095af9a61a8b49b2cf2b5"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.054533 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd6wl\" (UniqueName: \"kubernetes.io/projected/e4df6489-80cb-45c8-90b2-7fd2e9bca103-kube-api-access-wd6wl\") pod \"nova-kuttl-cell0-cell-mapping-j2ns7\" (UID: \"e4df6489-80cb-45c8-90b2-7fd2e9bca103\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-j2ns7"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.127340 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.141572 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.143391 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.145175 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.174792 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"]
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.175978 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.176850 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-j2ns7"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.181662 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-novncproxy-config-data"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.201800 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"]
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.229345 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.230415 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.239993 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.242229 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.305640 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.307048 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.311438 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.311643 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.343169 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a6a703a-c54b-46ed-98d7-72d3e821bb7f-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"5a6a703a-c54b-46ed-98d7-72d3e821bb7f\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.349881 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5af74df-03d5-46e9-b12f-9b2b948381d2-config-data\") pod \"nova-kuttl-api-0\" (UID: \"a5af74df-03d5-46e9-b12f-9b2b948381d2\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.350160 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a40bfc8-a365-4329-8362-ecd8b784f52d-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"0a40bfc8-a365-4329-8362-ecd8b784f52d\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.350238 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfsz7\" (UniqueName: \"kubernetes.io/projected/a5af74df-03d5-46e9-b12f-9b2b948381d2-kube-api-access-vfsz7\") pod \"nova-kuttl-api-0\" (UID: \"a5af74df-03d5-46e9-b12f-9b2b948381d2\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.350278 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tgbg\" (UniqueName: \"kubernetes.io/projected/5a6a703a-c54b-46ed-98d7-72d3e821bb7f-kube-api-access-5tgbg\") pod \"nova-kuttl-scheduler-0\" (UID: \"5a6a703a-c54b-46ed-98d7-72d3e821bb7f\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.350305 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5af74df-03d5-46e9-b12f-9b2b948381d2-logs\") pod \"nova-kuttl-api-0\" (UID: \"a5af74df-03d5-46e9-b12f-9b2b948381d2\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.350373 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9f7t\" (UniqueName: \"kubernetes.io/projected/0a40bfc8-a365-4329-8362-ecd8b784f52d-kube-api-access-j9f7t\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"0a40bfc8-a365-4329-8362-ecd8b784f52d\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.358333 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.451534 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cf31c53-34d5-4acc-b59e-096cfe798213-config-data\") pod \"3cf31c53-34d5-4acc-b59e-096cfe798213\" (UID: \"3cf31c53-34d5-4acc-b59e-096cfe798213\") "
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.451886 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kq98v\" (UniqueName: \"kubernetes.io/projected/3cf31c53-34d5-4acc-b59e-096cfe798213-kube-api-access-kq98v\") pod \"3cf31c53-34d5-4acc-b59e-096cfe798213\" (UID: \"3cf31c53-34d5-4acc-b59e-096cfe798213\") "
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.452175 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08918475-de15-4f9a-96a3-450082e907e6-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"08918475-de15-4f9a-96a3-450082e907e6\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.452202 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a6a703a-c54b-46ed-98d7-72d3e821bb7f-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"5a6a703a-c54b-46ed-98d7-72d3e821bb7f\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.452226 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5af74df-03d5-46e9-b12f-9b2b948381d2-config-data\") pod \"nova-kuttl-api-0\" (UID: \"a5af74df-03d5-46e9-b12f-9b2b948381d2\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.452294 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a40bfc8-a365-4329-8362-ecd8b784f52d-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"0a40bfc8-a365-4329-8362-ecd8b784f52d\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.452310 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfsz7\" (UniqueName: \"kubernetes.io/projected/a5af74df-03d5-46e9-b12f-9b2b948381d2-kube-api-access-vfsz7\") pod \"nova-kuttl-api-0\" (UID: \"a5af74df-03d5-46e9-b12f-9b2b948381d2\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.452331 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tgbg\" (UniqueName: \"kubernetes.io/projected/5a6a703a-c54b-46ed-98d7-72d3e821bb7f-kube-api-access-5tgbg\") pod \"nova-kuttl-scheduler-0\" (UID: \"5a6a703a-c54b-46ed-98d7-72d3e821bb7f\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.452348 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5af74df-03d5-46e9-b12f-9b2b948381d2-logs\") pod \"nova-kuttl-api-0\" (UID: \"a5af74df-03d5-46e9-b12f-9b2b948381d2\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.452363 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9f7t\" (UniqueName: \"kubernetes.io/projected/0a40bfc8-a365-4329-8362-ecd8b784f52d-kube-api-access-j9f7t\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"0a40bfc8-a365-4329-8362-ecd8b784f52d\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.452392 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vzq2\" (UniqueName: \"kubernetes.io/projected/08918475-de15-4f9a-96a3-450082e907e6-kube-api-access-8vzq2\") pod \"nova-kuttl-metadata-0\" (UID: \"08918475-de15-4f9a-96a3-450082e907e6\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.452428 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08918475-de15-4f9a-96a3-450082e907e6-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"08918475-de15-4f9a-96a3-450082e907e6\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.453655 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5af74df-03d5-46e9-b12f-9b2b948381d2-logs\") pod \"nova-kuttl-api-0\" (UID: \"a5af74df-03d5-46e9-b12f-9b2b948381d2\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.457827 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cf31c53-34d5-4acc-b59e-096cfe798213-kube-api-access-kq98v" (OuterVolumeSpecName: "kube-api-access-kq98v") pod "3cf31c53-34d5-4acc-b59e-096cfe798213" (UID: "3cf31c53-34d5-4acc-b59e-096cfe798213"). InnerVolumeSpecName "kube-api-access-kq98v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.463433 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a6a703a-c54b-46ed-98d7-72d3e821bb7f-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"5a6a703a-c54b-46ed-98d7-72d3e821bb7f\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.466265 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a40bfc8-a365-4329-8362-ecd8b784f52d-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"0a40bfc8-a365-4329-8362-ecd8b784f52d\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.466382 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5af74df-03d5-46e9-b12f-9b2b948381d2-config-data\") pod \"nova-kuttl-api-0\" (UID: \"a5af74df-03d5-46e9-b12f-9b2b948381d2\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.471194 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfsz7\" (UniqueName: \"kubernetes.io/projected/a5af74df-03d5-46e9-b12f-9b2b948381d2-kube-api-access-vfsz7\") pod \"nova-kuttl-api-0\" (UID: \"a5af74df-03d5-46e9-b12f-9b2b948381d2\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.472316 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tgbg\" (UniqueName: \"kubernetes.io/projected/5a6a703a-c54b-46ed-98d7-72d3e821bb7f-kube-api-access-5tgbg\") pod \"nova-kuttl-scheduler-0\" (UID: \"5a6a703a-c54b-46ed-98d7-72d3e821bb7f\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.472765 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9f7t\" (UniqueName: \"kubernetes.io/projected/0a40bfc8-a365-4329-8362-ecd8b784f52d-kube-api-access-j9f7t\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"0a40bfc8-a365-4329-8362-ecd8b784f52d\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.482616 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cf31c53-34d5-4acc-b59e-096cfe798213-config-data" (OuterVolumeSpecName: "config-data") pod "3cf31c53-34d5-4acc-b59e-096cfe798213" (UID: "3cf31c53-34d5-4acc-b59e-096cfe798213"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.512749 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.554527 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vzq2\" (UniqueName: \"kubernetes.io/projected/08918475-de15-4f9a-96a3-450082e907e6-kube-api-access-8vzq2\") pod \"nova-kuttl-metadata-0\" (UID: \"08918475-de15-4f9a-96a3-450082e907e6\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.554616 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08918475-de15-4f9a-96a3-450082e907e6-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"08918475-de15-4f9a-96a3-450082e907e6\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.554655 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08918475-de15-4f9a-96a3-450082e907e6-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"08918475-de15-4f9a-96a3-450082e907e6\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.554788 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cf31c53-34d5-4acc-b59e-096cfe798213-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.554803 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kq98v\" (UniqueName: \"kubernetes.io/projected/3cf31c53-34d5-4acc-b59e-096cfe798213-kube-api-access-kq98v\") on node \"crc\" DevicePath \"\""
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.556122 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08918475-de15-4f9a-96a3-450082e907e6-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"08918475-de15-4f9a-96a3-450082e907e6\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.559561 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08918475-de15-4f9a-96a3-450082e907e6-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"08918475-de15-4f9a-96a3-450082e907e6\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.571056 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vzq2\" (UniqueName: \"kubernetes.io/projected/08918475-de15-4f9a-96a3-450082e907e6-kube-api-access-8vzq2\") pod \"nova-kuttl-metadata-0\" (UID: \"08918475-de15-4f9a-96a3-450082e907e6\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.626145 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.653221 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.764709 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.798633 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-j2ns7"]
Jan 27 13:39:06 crc kubenswrapper[4786]: W0127 13:39:06.960229 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a40bfc8_a365_4329_8362_ecd8b784f52d.slice/crio-d03459970dd163253c26a878705e794ad9395feaa69d8566814b0ae96719992b WatchSource:0}: Error finding container d03459970dd163253c26a878705e794ad9395feaa69d8566814b0ae96719992b: Status 404 returned error can't find the container with id d03459970dd163253c26a878705e794ad9395feaa69d8566814b0ae96719992b
Jan 27 13:39:06 crc kubenswrapper[4786]: I0127 13:39:06.962972 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"]
Jan 27 13:39:07 crc kubenswrapper[4786]: I0127 13:39:07.065142 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"3cf31c53-34d5-4acc-b59e-096cfe798213","Type":"ContainerDied","Data":"1595b9288597317f6121426d08312149b66926944c088505fee808b664be3cd7"}
Jan 27 13:39:07 crc kubenswrapper[4786]: I0127 13:39:07.065192 4786 scope.go:117] "RemoveContainer" containerID="7a8d504a24094c0a9dadddd97875ce7482cd0214e46e0fb4e03f736e3a9fea07"
Jan 27 13:39:07 crc kubenswrapper[4786]: I0127 13:39:07.065307 4786 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 27 13:39:07 crc kubenswrapper[4786]: I0127 13:39:07.070255 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"0a40bfc8-a365-4329-8362-ecd8b784f52d","Type":"ContainerStarted","Data":"d03459970dd163253c26a878705e794ad9395feaa69d8566814b0ae96719992b"} Jan 27 13:39:07 crc kubenswrapper[4786]: I0127 13:39:07.074491 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-j2ns7" event={"ID":"e4df6489-80cb-45c8-90b2-7fd2e9bca103","Type":"ContainerStarted","Data":"b149f864946bbb75733731eb4c8a2d8304720f30dcdbc16ab982d29d51d7f9c6"} Jan 27 13:39:07 crc kubenswrapper[4786]: I0127 13:39:07.074538 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-j2ns7" event={"ID":"e4df6489-80cb-45c8-90b2-7fd2e9bca103","Type":"ContainerStarted","Data":"2e45031dfd03670e832293ae9dde915f9edd4a6b1b084a4540b727c68c579e24"} Jan 27 13:39:07 crc kubenswrapper[4786]: I0127 13:39:07.122782 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-j2ns7" podStartSLOduration=2.122757367 podStartE2EDuration="2.122757367s" podCreationTimestamp="2026-01-27 13:39:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:39:07.104574359 +0000 UTC m=+1930.315188488" watchObservedRunningTime="2026-01-27 13:39:07.122757367 +0000 UTC m=+1930.333371486" Jan 27 13:39:07 crc kubenswrapper[4786]: I0127 13:39:07.133718 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 27 13:39:07 crc kubenswrapper[4786]: I0127 13:39:07.149433 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 27 13:39:07 crc kubenswrapper[4786]: I0127 13:39:07.163036 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:39:07 crc kubenswrapper[4786]: I0127 13:39:07.170589 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-ssjwj"] Jan 27 13:39:07 crc kubenswrapper[4786]: E0127 13:39:07.170992 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cf31c53-34d5-4acc-b59e-096cfe798213" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 27 13:39:07 crc kubenswrapper[4786]: I0127 13:39:07.171014 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cf31c53-34d5-4acc-b59e-096cfe798213" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 27 13:39:07 crc kubenswrapper[4786]: E0127 13:39:07.171092 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cf31c53-34d5-4acc-b59e-096cfe798213" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 27 13:39:07 crc kubenswrapper[4786]: I0127 13:39:07.171142 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cf31c53-34d5-4acc-b59e-096cfe798213" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 27 13:39:07 crc kubenswrapper[4786]: I0127 13:39:07.171428 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cf31c53-34d5-4acc-b59e-096cfe798213" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 27 13:39:07 crc kubenswrapper[4786]: I0127 13:39:07.171520 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cf31c53-34d5-4acc-b59e-096cfe798213" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 27 13:39:07 crc kubenswrapper[4786]: I0127 13:39:07.171539 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cf31c53-34d5-4acc-b59e-096cfe798213" 
containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 27 13:39:07 crc kubenswrapper[4786]: I0127 13:39:07.172168 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-ssjwj" Jan 27 13:39:07 crc kubenswrapper[4786]: I0127 13:39:07.176381 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-scripts" Jan 27 13:39:07 crc kubenswrapper[4786]: I0127 13:39:07.182829 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data" Jan 27 13:39:07 crc kubenswrapper[4786]: I0127 13:39:07.186196 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-ssjwj"] Jan 27 13:39:07 crc kubenswrapper[4786]: I0127 13:39:07.221397 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:39:07 crc kubenswrapper[4786]: W0127 13:39:07.233577 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08918475_de15_4f9a_96a3_450082e907e6.slice/crio-d7a63ec2decf86f5bb2d79cf3cfb1601fe1d3c53d7c9cd9770580cb0ca3e8ade WatchSource:0}: Error finding container d7a63ec2decf86f5bb2d79cf3cfb1601fe1d3c53d7c9cd9770580cb0ca3e8ade: Status 404 returned error can't find the container with id d7a63ec2decf86f5bb2d79cf3cfb1601fe1d3c53d7c9cd9770580cb0ca3e8ade Jan 27 13:39:07 crc kubenswrapper[4786]: I0127 13:39:07.268487 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fac61271-09f6-4a27-bb20-edf0cb037d72-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-ssjwj\" (UID: \"fac61271-09f6-4a27-bb20-edf0cb037d72\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-ssjwj" Jan 27 13:39:07 crc 
kubenswrapper[4786]: I0127 13:39:07.269206 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fac61271-09f6-4a27-bb20-edf0cb037d72-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-ssjwj\" (UID: \"fac61271-09f6-4a27-bb20-edf0cb037d72\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-ssjwj" Jan 27 13:39:07 crc kubenswrapper[4786]: I0127 13:39:07.269426 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c8ft\" (UniqueName: \"kubernetes.io/projected/fac61271-09f6-4a27-bb20-edf0cb037d72-kube-api-access-9c8ft\") pod \"nova-kuttl-cell1-conductor-db-sync-ssjwj\" (UID: \"fac61271-09f6-4a27-bb20-edf0cb037d72\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-ssjwj" Jan 27 13:39:07 crc kubenswrapper[4786]: I0127 13:39:07.298534 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:39:07 crc kubenswrapper[4786]: I0127 13:39:07.371336 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fac61271-09f6-4a27-bb20-edf0cb037d72-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-ssjwj\" (UID: \"fac61271-09f6-4a27-bb20-edf0cb037d72\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-ssjwj" Jan 27 13:39:07 crc kubenswrapper[4786]: I0127 13:39:07.371631 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c8ft\" (UniqueName: \"kubernetes.io/projected/fac61271-09f6-4a27-bb20-edf0cb037d72-kube-api-access-9c8ft\") pod \"nova-kuttl-cell1-conductor-db-sync-ssjwj\" (UID: \"fac61271-09f6-4a27-bb20-edf0cb037d72\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-ssjwj" Jan 27 13:39:07 crc kubenswrapper[4786]: I0127 13:39:07.371778 4786 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fac61271-09f6-4a27-bb20-edf0cb037d72-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-ssjwj\" (UID: \"fac61271-09f6-4a27-bb20-edf0cb037d72\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-ssjwj" Jan 27 13:39:07 crc kubenswrapper[4786]: I0127 13:39:07.386905 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fac61271-09f6-4a27-bb20-edf0cb037d72-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-ssjwj\" (UID: \"fac61271-09f6-4a27-bb20-edf0cb037d72\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-ssjwj" Jan 27 13:39:07 crc kubenswrapper[4786]: I0127 13:39:07.386967 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fac61271-09f6-4a27-bb20-edf0cb037d72-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-ssjwj\" (UID: \"fac61271-09f6-4a27-bb20-edf0cb037d72\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-ssjwj" Jan 27 13:39:07 crc kubenswrapper[4786]: I0127 13:39:07.392375 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c8ft\" (UniqueName: \"kubernetes.io/projected/fac61271-09f6-4a27-bb20-edf0cb037d72-kube-api-access-9c8ft\") pod \"nova-kuttl-cell1-conductor-db-sync-ssjwj\" (UID: \"fac61271-09f6-4a27-bb20-edf0cb037d72\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-ssjwj" Jan 27 13:39:07 crc kubenswrapper[4786]: I0127 13:39:07.478198 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cf31c53-34d5-4acc-b59e-096cfe798213" path="/var/lib/kubelet/pods/3cf31c53-34d5-4acc-b59e-096cfe798213/volumes" Jan 27 13:39:07 crc kubenswrapper[4786]: I0127 13:39:07.487390 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-ssjwj" Jan 27 13:39:07 crc kubenswrapper[4786]: I0127 13:39:07.946363 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-ssjwj"] Jan 27 13:39:08 crc kubenswrapper[4786]: I0127 13:39:08.105217 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-ssjwj" event={"ID":"fac61271-09f6-4a27-bb20-edf0cb037d72","Type":"ContainerStarted","Data":"a9df56fd21aaf4145b0de4d30ec00f4fd302be88c8047ecd4a9aad589a29d319"} Jan 27 13:39:08 crc kubenswrapper[4786]: I0127 13:39:08.107134 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"5a6a703a-c54b-46ed-98d7-72d3e821bb7f","Type":"ContainerStarted","Data":"fbfe810256ade72717132925281639c18155483dd3240fbf420dce7ba97d2b82"} Jan 27 13:39:08 crc kubenswrapper[4786]: I0127 13:39:08.107199 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"5a6a703a-c54b-46ed-98d7-72d3e821bb7f","Type":"ContainerStarted","Data":"9b825d2b8032f141ca4a82cb867005df9c63756b31a344173550f6b49650be72"} Jan 27 13:39:08 crc kubenswrapper[4786]: I0127 13:39:08.109880 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"0a40bfc8-a365-4329-8362-ecd8b784f52d","Type":"ContainerStarted","Data":"26c54080ad1b6c4f02c0dd5ee5b6e8e5a5b3c311bbfe17f9879301a00697d8b4"} Jan 27 13:39:08 crc kubenswrapper[4786]: I0127 13:39:08.113328 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"08918475-de15-4f9a-96a3-450082e907e6","Type":"ContainerStarted","Data":"c112c42d98ff69a07fb690128770638f9a1558fa5f3044133c32d6be56904c78"} Jan 27 13:39:08 crc kubenswrapper[4786]: I0127 13:39:08.113406 4786 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"08918475-de15-4f9a-96a3-450082e907e6","Type":"ContainerStarted","Data":"14668cf05098c7ce0b26675fdf8c7c730fec615bc930502bc12ab55b0bddd144"} Jan 27 13:39:08 crc kubenswrapper[4786]: I0127 13:39:08.113425 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"08918475-de15-4f9a-96a3-450082e907e6","Type":"ContainerStarted","Data":"d7a63ec2decf86f5bb2d79cf3cfb1601fe1d3c53d7c9cd9770580cb0ca3e8ade"} Jan 27 13:39:08 crc kubenswrapper[4786]: I0127 13:39:08.116741 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"a5af74df-03d5-46e9-b12f-9b2b948381d2","Type":"ContainerStarted","Data":"1908347159f07e616f7202a1b7d0bf85d4701a3c2493e544fe01df88f2c2c9d8"} Jan 27 13:39:08 crc kubenswrapper[4786]: I0127 13:39:08.116800 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"a5af74df-03d5-46e9-b12f-9b2b948381d2","Type":"ContainerStarted","Data":"3c8a6affc26317e1c771a85129de709fce9f6e3024f2fda3a048ba40cb5f8bad"} Jan 27 13:39:08 crc kubenswrapper[4786]: I0127 13:39:08.116822 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"a5af74df-03d5-46e9-b12f-9b2b948381d2","Type":"ContainerStarted","Data":"76cefee929b6dcfa0bd5e24973060530e6f408c4c37f1b907636405c2dc12b29"} Jan 27 13:39:08 crc kubenswrapper[4786]: I0127 13:39:08.126462 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.126440917 podStartE2EDuration="2.126440917s" podCreationTimestamp="2026-01-27 13:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:39:08.124951337 +0000 UTC m=+1931.335565446" watchObservedRunningTime="2026-01-27 
13:39:08.126440917 +0000 UTC m=+1931.337055036" Jan 27 13:39:08 crc kubenswrapper[4786]: I0127 13:39:08.148157 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.1481346119999998 podStartE2EDuration="2.148134612s" podCreationTimestamp="2026-01-27 13:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:39:08.141460679 +0000 UTC m=+1931.352074808" watchObservedRunningTime="2026-01-27 13:39:08.148134612 +0000 UTC m=+1931.358748731" Jan 27 13:39:08 crc kubenswrapper[4786]: I0127 13:39:08.176062 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podStartSLOduration=2.176034876 podStartE2EDuration="2.176034876s" podCreationTimestamp="2026-01-27 13:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:39:08.162284569 +0000 UTC m=+1931.372898688" watchObservedRunningTime="2026-01-27 13:39:08.176034876 +0000 UTC m=+1931.386648995" Jan 27 13:39:08 crc kubenswrapper[4786]: I0127 13:39:08.191775 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.191759347 podStartE2EDuration="2.191759347s" podCreationTimestamp="2026-01-27 13:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:39:08.184889818 +0000 UTC m=+1931.395503937" watchObservedRunningTime="2026-01-27 13:39:08.191759347 +0000 UTC m=+1931.402373466" Jan 27 13:39:09 crc kubenswrapper[4786]: I0127 13:39:09.127842 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-ssjwj" 
event={"ID":"fac61271-09f6-4a27-bb20-edf0cb037d72","Type":"ContainerStarted","Data":"78034341a7536312ef1d85d81d892c553707c2d23f2eea20de173280f5ec812e"} Jan 27 13:39:09 crc kubenswrapper[4786]: I0127 13:39:09.145861 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-ssjwj" podStartSLOduration=2.145841118 podStartE2EDuration="2.145841118s" podCreationTimestamp="2026-01-27 13:39:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:39:09.139216887 +0000 UTC m=+1932.349831016" watchObservedRunningTime="2026-01-27 13:39:09.145841118 +0000 UTC m=+1932.356455237" Jan 27 13:39:11 crc kubenswrapper[4786]: I0127 13:39:11.144976 4786 generic.go:334] "Generic (PLEG): container finished" podID="fac61271-09f6-4a27-bb20-edf0cb037d72" containerID="78034341a7536312ef1d85d81d892c553707c2d23f2eea20de173280f5ec812e" exitCode=0 Jan 27 13:39:11 crc kubenswrapper[4786]: I0127 13:39:11.145243 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-ssjwj" event={"ID":"fac61271-09f6-4a27-bb20-edf0cb037d72","Type":"ContainerDied","Data":"78034341a7536312ef1d85d81d892c553707c2d23f2eea20de173280f5ec812e"} Jan 27 13:39:11 crc kubenswrapper[4786]: I0127 13:39:11.513681 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:39:11 crc kubenswrapper[4786]: I0127 13:39:11.626372 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:39:11 crc kubenswrapper[4786]: I0127 13:39:11.654682 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:39:11 crc kubenswrapper[4786]: I0127 13:39:11.654750 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:39:12 crc kubenswrapper[4786]: I0127 13:39:12.155128 4786 generic.go:334] "Generic (PLEG): container finished" podID="e4df6489-80cb-45c8-90b2-7fd2e9bca103" containerID="b149f864946bbb75733731eb4c8a2d8304720f30dcdbc16ab982d29d51d7f9c6" exitCode=0 Jan 27 13:39:12 crc kubenswrapper[4786]: I0127 13:39:12.155173 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-j2ns7" event={"ID":"e4df6489-80cb-45c8-90b2-7fd2e9bca103","Type":"ContainerDied","Data":"b149f864946bbb75733731eb4c8a2d8304720f30dcdbc16ab982d29d51d7f9c6"} Jan 27 13:39:12 crc kubenswrapper[4786]: I0127 13:39:12.507545 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-ssjwj" Jan 27 13:39:12 crc kubenswrapper[4786]: I0127 13:39:12.574365 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9c8ft\" (UniqueName: \"kubernetes.io/projected/fac61271-09f6-4a27-bb20-edf0cb037d72-kube-api-access-9c8ft\") pod \"fac61271-09f6-4a27-bb20-edf0cb037d72\" (UID: \"fac61271-09f6-4a27-bb20-edf0cb037d72\") " Jan 27 13:39:12 crc kubenswrapper[4786]: I0127 13:39:12.574420 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fac61271-09f6-4a27-bb20-edf0cb037d72-config-data\") pod \"fac61271-09f6-4a27-bb20-edf0cb037d72\" (UID: \"fac61271-09f6-4a27-bb20-edf0cb037d72\") " Jan 27 13:39:12 crc kubenswrapper[4786]: I0127 13:39:12.574449 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fac61271-09f6-4a27-bb20-edf0cb037d72-scripts\") pod \"fac61271-09f6-4a27-bb20-edf0cb037d72\" (UID: \"fac61271-09f6-4a27-bb20-edf0cb037d72\") " Jan 27 13:39:12 crc kubenswrapper[4786]: I0127 13:39:12.579901 4786 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fac61271-09f6-4a27-bb20-edf0cb037d72-scripts" (OuterVolumeSpecName: "scripts") pod "fac61271-09f6-4a27-bb20-edf0cb037d72" (UID: "fac61271-09f6-4a27-bb20-edf0cb037d72"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:39:12 crc kubenswrapper[4786]: I0127 13:39:12.580041 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fac61271-09f6-4a27-bb20-edf0cb037d72-kube-api-access-9c8ft" (OuterVolumeSpecName: "kube-api-access-9c8ft") pod "fac61271-09f6-4a27-bb20-edf0cb037d72" (UID: "fac61271-09f6-4a27-bb20-edf0cb037d72"). InnerVolumeSpecName "kube-api-access-9c8ft". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:39:12 crc kubenswrapper[4786]: I0127 13:39:12.597207 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fac61271-09f6-4a27-bb20-edf0cb037d72-config-data" (OuterVolumeSpecName: "config-data") pod "fac61271-09f6-4a27-bb20-edf0cb037d72" (UID: "fac61271-09f6-4a27-bb20-edf0cb037d72"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:39:12 crc kubenswrapper[4786]: I0127 13:39:12.676680 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fac61271-09f6-4a27-bb20-edf0cb037d72-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:39:12 crc kubenswrapper[4786]: I0127 13:39:12.676714 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9c8ft\" (UniqueName: \"kubernetes.io/projected/fac61271-09f6-4a27-bb20-edf0cb037d72-kube-api-access-9c8ft\") on node \"crc\" DevicePath \"\"" Jan 27 13:39:12 crc kubenswrapper[4786]: I0127 13:39:12.676724 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fac61271-09f6-4a27-bb20-edf0cb037d72-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:39:13 crc kubenswrapper[4786]: I0127 13:39:13.165476 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-ssjwj" event={"ID":"fac61271-09f6-4a27-bb20-edf0cb037d72","Type":"ContainerDied","Data":"a9df56fd21aaf4145b0de4d30ec00f4fd302be88c8047ecd4a9aad589a29d319"} Jan 27 13:39:13 crc kubenswrapper[4786]: I0127 13:39:13.165824 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9df56fd21aaf4145b0de4d30ec00f4fd302be88c8047ecd4a9aad589a29d319" Jan 27 13:39:13 crc kubenswrapper[4786]: I0127 13:39:13.165488 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-ssjwj" Jan 27 13:39:13 crc kubenswrapper[4786]: I0127 13:39:13.245255 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 27 13:39:13 crc kubenswrapper[4786]: E0127 13:39:13.245783 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fac61271-09f6-4a27-bb20-edf0cb037d72" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 27 13:39:13 crc kubenswrapper[4786]: I0127 13:39:13.245857 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="fac61271-09f6-4a27-bb20-edf0cb037d72" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 27 13:39:13 crc kubenswrapper[4786]: E0127 13:39:13.245917 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cf31c53-34d5-4acc-b59e-096cfe798213" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 27 13:39:13 crc kubenswrapper[4786]: I0127 13:39:13.246028 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cf31c53-34d5-4acc-b59e-096cfe798213" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 27 13:39:13 crc kubenswrapper[4786]: I0127 13:39:13.246233 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="fac61271-09f6-4a27-bb20-edf0cb037d72" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 27 13:39:13 crc kubenswrapper[4786]: I0127 13:39:13.246796 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:39:13 crc kubenswrapper[4786]: I0127 13:39:13.255322 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data" Jan 27 13:39:13 crc kubenswrapper[4786]: I0127 13:39:13.256197 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 27 13:39:13 crc kubenswrapper[4786]: I0127 13:39:13.289053 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw748\" (UniqueName: \"kubernetes.io/projected/616fd9dd-c4dc-45a7-ab66-358fc07acea0-kube-api-access-vw748\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"616fd9dd-c4dc-45a7-ab66-358fc07acea0\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:39:13 crc kubenswrapper[4786]: I0127 13:39:13.289296 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/616fd9dd-c4dc-45a7-ab66-358fc07acea0-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"616fd9dd-c4dc-45a7-ab66-358fc07acea0\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:39:13 crc kubenswrapper[4786]: I0127 13:39:13.391335 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vw748\" (UniqueName: \"kubernetes.io/projected/616fd9dd-c4dc-45a7-ab66-358fc07acea0-kube-api-access-vw748\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"616fd9dd-c4dc-45a7-ab66-358fc07acea0\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:39:13 crc kubenswrapper[4786]: I0127 13:39:13.391394 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/616fd9dd-c4dc-45a7-ab66-358fc07acea0-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: 
\"616fd9dd-c4dc-45a7-ab66-358fc07acea0\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:39:13 crc kubenswrapper[4786]: I0127 13:39:13.397294 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/616fd9dd-c4dc-45a7-ab66-358fc07acea0-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"616fd9dd-c4dc-45a7-ab66-358fc07acea0\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:39:13 crc kubenswrapper[4786]: I0127 13:39:13.409109 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw748\" (UniqueName: \"kubernetes.io/projected/616fd9dd-c4dc-45a7-ab66-358fc07acea0-kube-api-access-vw748\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"616fd9dd-c4dc-45a7-ab66-358fc07acea0\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:39:13 crc kubenswrapper[4786]: I0127 13:39:13.498211 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-j2ns7" Jan 27 13:39:13 crc kubenswrapper[4786]: I0127 13:39:13.563064 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:39:13 crc kubenswrapper[4786]: I0127 13:39:13.593542 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4df6489-80cb-45c8-90b2-7fd2e9bca103-scripts\") pod \"e4df6489-80cb-45c8-90b2-7fd2e9bca103\" (UID: \"e4df6489-80cb-45c8-90b2-7fd2e9bca103\") " Jan 27 13:39:13 crc kubenswrapper[4786]: I0127 13:39:13.593811 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4df6489-80cb-45c8-90b2-7fd2e9bca103-config-data\") pod \"e4df6489-80cb-45c8-90b2-7fd2e9bca103\" (UID: \"e4df6489-80cb-45c8-90b2-7fd2e9bca103\") " Jan 27 13:39:13 crc kubenswrapper[4786]: I0127 13:39:13.593856 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wd6wl\" (UniqueName: \"kubernetes.io/projected/e4df6489-80cb-45c8-90b2-7fd2e9bca103-kube-api-access-wd6wl\") pod \"e4df6489-80cb-45c8-90b2-7fd2e9bca103\" (UID: \"e4df6489-80cb-45c8-90b2-7fd2e9bca103\") " Jan 27 13:39:13 crc kubenswrapper[4786]: I0127 13:39:13.605349 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4df6489-80cb-45c8-90b2-7fd2e9bca103-scripts" (OuterVolumeSpecName: "scripts") pod "e4df6489-80cb-45c8-90b2-7fd2e9bca103" (UID: "e4df6489-80cb-45c8-90b2-7fd2e9bca103"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:39:13 crc kubenswrapper[4786]: I0127 13:39:13.605385 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4df6489-80cb-45c8-90b2-7fd2e9bca103-kube-api-access-wd6wl" (OuterVolumeSpecName: "kube-api-access-wd6wl") pod "e4df6489-80cb-45c8-90b2-7fd2e9bca103" (UID: "e4df6489-80cb-45c8-90b2-7fd2e9bca103"). InnerVolumeSpecName "kube-api-access-wd6wl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:39:13 crc kubenswrapper[4786]: I0127 13:39:13.621587 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4df6489-80cb-45c8-90b2-7fd2e9bca103-config-data" (OuterVolumeSpecName: "config-data") pod "e4df6489-80cb-45c8-90b2-7fd2e9bca103" (UID: "e4df6489-80cb-45c8-90b2-7fd2e9bca103"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:39:13 crc kubenswrapper[4786]: I0127 13:39:13.696318 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4df6489-80cb-45c8-90b2-7fd2e9bca103-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:39:13 crc kubenswrapper[4786]: I0127 13:39:13.696351 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wd6wl\" (UniqueName: \"kubernetes.io/projected/e4df6489-80cb-45c8-90b2-7fd2e9bca103-kube-api-access-wd6wl\") on node \"crc\" DevicePath \"\"" Jan 27 13:39:13 crc kubenswrapper[4786]: I0127 13:39:13.696362 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4df6489-80cb-45c8-90b2-7fd2e9bca103-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:39:14 crc kubenswrapper[4786]: I0127 13:39:14.004626 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 27 13:39:14 crc kubenswrapper[4786]: W0127 13:39:14.008230 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod616fd9dd_c4dc_45a7_ab66_358fc07acea0.slice/crio-fd8a3388ac71bfdeea8d557b76b6d20c2cc9d6aa189520f4095f7354a9e145c8 WatchSource:0}: Error finding container fd8a3388ac71bfdeea8d557b76b6d20c2cc9d6aa189520f4095f7354a9e145c8: Status 404 returned error can't find the container with id fd8a3388ac71bfdeea8d557b76b6d20c2cc9d6aa189520f4095f7354a9e145c8 Jan 27 
13:39:14 crc kubenswrapper[4786]: I0127 13:39:14.173290 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"616fd9dd-c4dc-45a7-ab66-358fc07acea0","Type":"ContainerStarted","Data":"fe430e075dbcaf2a391dbb295729fff0dfbb4338226558acb3f3fd3a2a769ccb"} Jan 27 13:39:14 crc kubenswrapper[4786]: I0127 13:39:14.173680 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"616fd9dd-c4dc-45a7-ab66-358fc07acea0","Type":"ContainerStarted","Data":"fd8a3388ac71bfdeea8d557b76b6d20c2cc9d6aa189520f4095f7354a9e145c8"} Jan 27 13:39:14 crc kubenswrapper[4786]: I0127 13:39:14.173706 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 27 13:39:14 crc kubenswrapper[4786]: I0127 13:39:14.175074 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-j2ns7" event={"ID":"e4df6489-80cb-45c8-90b2-7fd2e9bca103","Type":"ContainerDied","Data":"2e45031dfd03670e832293ae9dde915f9edd4a6b1b084a4540b727c68c579e24"} Jan 27 13:39:14 crc kubenswrapper[4786]: I0127 13:39:14.175101 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e45031dfd03670e832293ae9dde915f9edd4a6b1b084a4540b727c68c579e24" Jan 27 13:39:14 crc kubenswrapper[4786]: I0127 13:39:14.175219 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-j2ns7" Jan 27 13:39:14 crc kubenswrapper[4786]: I0127 13:39:14.211137 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podStartSLOduration=1.211110552 podStartE2EDuration="1.211110552s" podCreationTimestamp="2026-01-27 13:39:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:39:14.207996927 +0000 UTC m=+1937.418611136" watchObservedRunningTime="2026-01-27 13:39:14.211110552 +0000 UTC m=+1937.421724681" Jan 27 13:39:14 crc kubenswrapper[4786]: I0127 13:39:14.377654 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:39:14 crc kubenswrapper[4786]: I0127 13:39:14.378078 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="a5af74df-03d5-46e9-b12f-9b2b948381d2" containerName="nova-kuttl-api-log" containerID="cri-o://3c8a6affc26317e1c771a85129de709fce9f6e3024f2fda3a048ba40cb5f8bad" gracePeriod=30 Jan 27 13:39:14 crc kubenswrapper[4786]: I0127 13:39:14.378299 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="a5af74df-03d5-46e9-b12f-9b2b948381d2" containerName="nova-kuttl-api-api" containerID="cri-o://1908347159f07e616f7202a1b7d0bf85d4701a3c2493e544fe01df88f2c2c9d8" gracePeriod=30 Jan 27 13:39:14 crc kubenswrapper[4786]: I0127 13:39:14.411333 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:39:14 crc kubenswrapper[4786]: I0127 13:39:14.412176 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="5a6a703a-c54b-46ed-98d7-72d3e821bb7f" containerName="nova-kuttl-scheduler-scheduler" 
containerID="cri-o://fbfe810256ade72717132925281639c18155483dd3240fbf420dce7ba97d2b82" gracePeriod=30 Jan 27 13:39:14 crc kubenswrapper[4786]: I0127 13:39:14.422811 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:39:14 crc kubenswrapper[4786]: I0127 13:39:14.423068 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="08918475-de15-4f9a-96a3-450082e907e6" containerName="nova-kuttl-metadata-log" containerID="cri-o://14668cf05098c7ce0b26675fdf8c7c730fec615bc930502bc12ab55b0bddd144" gracePeriod=30 Jan 27 13:39:14 crc kubenswrapper[4786]: I0127 13:39:14.423192 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="08918475-de15-4f9a-96a3-450082e907e6" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://c112c42d98ff69a07fb690128770638f9a1558fa5f3044133c32d6be56904c78" gracePeriod=30 Jan 27 13:39:14 crc kubenswrapper[4786]: I0127 13:39:14.858741 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:39:14 crc kubenswrapper[4786]: I0127 13:39:14.871960 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.017048 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08918475-de15-4f9a-96a3-450082e907e6-config-data\") pod \"08918475-de15-4f9a-96a3-450082e907e6\" (UID: \"08918475-de15-4f9a-96a3-450082e907e6\") " Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.017519 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfsz7\" (UniqueName: \"kubernetes.io/projected/a5af74df-03d5-46e9-b12f-9b2b948381d2-kube-api-access-vfsz7\") pod \"a5af74df-03d5-46e9-b12f-9b2b948381d2\" (UID: \"a5af74df-03d5-46e9-b12f-9b2b948381d2\") " Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.017767 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08918475-de15-4f9a-96a3-450082e907e6-logs\") pod \"08918475-de15-4f9a-96a3-450082e907e6\" (UID: \"08918475-de15-4f9a-96a3-450082e907e6\") " Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.017932 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vzq2\" (UniqueName: \"kubernetes.io/projected/08918475-de15-4f9a-96a3-450082e907e6-kube-api-access-8vzq2\") pod \"08918475-de15-4f9a-96a3-450082e907e6\" (UID: \"08918475-de15-4f9a-96a3-450082e907e6\") " Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.018068 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5af74df-03d5-46e9-b12f-9b2b948381d2-logs\") pod \"a5af74df-03d5-46e9-b12f-9b2b948381d2\" (UID: \"a5af74df-03d5-46e9-b12f-9b2b948381d2\") " Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.018183 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/08918475-de15-4f9a-96a3-450082e907e6-logs" (OuterVolumeSpecName: "logs") pod "08918475-de15-4f9a-96a3-450082e907e6" (UID: "08918475-de15-4f9a-96a3-450082e907e6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.018371 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5af74df-03d5-46e9-b12f-9b2b948381d2-config-data\") pod \"a5af74df-03d5-46e9-b12f-9b2b948381d2\" (UID: \"a5af74df-03d5-46e9-b12f-9b2b948381d2\") " Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.018477 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5af74df-03d5-46e9-b12f-9b2b948381d2-logs" (OuterVolumeSpecName: "logs") pod "a5af74df-03d5-46e9-b12f-9b2b948381d2" (UID: "a5af74df-03d5-46e9-b12f-9b2b948381d2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.019306 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08918475-de15-4f9a-96a3-450082e907e6-logs\") on node \"crc\" DevicePath \"\"" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.019411 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5af74df-03d5-46e9-b12f-9b2b948381d2-logs\") on node \"crc\" DevicePath \"\"" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.024295 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08918475-de15-4f9a-96a3-450082e907e6-kube-api-access-8vzq2" (OuterVolumeSpecName: "kube-api-access-8vzq2") pod "08918475-de15-4f9a-96a3-450082e907e6" (UID: "08918475-de15-4f9a-96a3-450082e907e6"). InnerVolumeSpecName "kube-api-access-8vzq2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.024483 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5af74df-03d5-46e9-b12f-9b2b948381d2-kube-api-access-vfsz7" (OuterVolumeSpecName: "kube-api-access-vfsz7") pod "a5af74df-03d5-46e9-b12f-9b2b948381d2" (UID: "a5af74df-03d5-46e9-b12f-9b2b948381d2"). InnerVolumeSpecName "kube-api-access-vfsz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.039658 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08918475-de15-4f9a-96a3-450082e907e6-config-data" (OuterVolumeSpecName: "config-data") pod "08918475-de15-4f9a-96a3-450082e907e6" (UID: "08918475-de15-4f9a-96a3-450082e907e6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.039968 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5af74df-03d5-46e9-b12f-9b2b948381d2-config-data" (OuterVolumeSpecName: "config-data") pod "a5af74df-03d5-46e9-b12f-9b2b948381d2" (UID: "a5af74df-03d5-46e9-b12f-9b2b948381d2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.120361 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5af74df-03d5-46e9-b12f-9b2b948381d2-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.120405 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08918475-de15-4f9a-96a3-450082e907e6-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.120415 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfsz7\" (UniqueName: \"kubernetes.io/projected/a5af74df-03d5-46e9-b12f-9b2b948381d2-kube-api-access-vfsz7\") on node \"crc\" DevicePath \"\"" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.120426 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vzq2\" (UniqueName: \"kubernetes.io/projected/08918475-de15-4f9a-96a3-450082e907e6-kube-api-access-8vzq2\") on node \"crc\" DevicePath \"\"" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.184462 4786 generic.go:334] "Generic (PLEG): container finished" podID="a5af74df-03d5-46e9-b12f-9b2b948381d2" containerID="1908347159f07e616f7202a1b7d0bf85d4701a3c2493e544fe01df88f2c2c9d8" exitCode=0 Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.184536 4786 generic.go:334] "Generic (PLEG): container finished" podID="a5af74df-03d5-46e9-b12f-9b2b948381d2" containerID="3c8a6affc26317e1c771a85129de709fce9f6e3024f2fda3a048ba40cb5f8bad" exitCode=143 Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.184650 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.184645 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"a5af74df-03d5-46e9-b12f-9b2b948381d2","Type":"ContainerDied","Data":"1908347159f07e616f7202a1b7d0bf85d4701a3c2493e544fe01df88f2c2c9d8"} Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.184749 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"a5af74df-03d5-46e9-b12f-9b2b948381d2","Type":"ContainerDied","Data":"3c8a6affc26317e1c771a85129de709fce9f6e3024f2fda3a048ba40cb5f8bad"} Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.184763 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"a5af74df-03d5-46e9-b12f-9b2b948381d2","Type":"ContainerDied","Data":"76cefee929b6dcfa0bd5e24973060530e6f408c4c37f1b907636405c2dc12b29"} Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.184781 4786 scope.go:117] "RemoveContainer" containerID="1908347159f07e616f7202a1b7d0bf85d4701a3c2493e544fe01df88f2c2c9d8" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.186939 4786 generic.go:334] "Generic (PLEG): container finished" podID="08918475-de15-4f9a-96a3-450082e907e6" containerID="c112c42d98ff69a07fb690128770638f9a1558fa5f3044133c32d6be56904c78" exitCode=0 Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.186961 4786 generic.go:334] "Generic (PLEG): container finished" podID="08918475-de15-4f9a-96a3-450082e907e6" containerID="14668cf05098c7ce0b26675fdf8c7c730fec615bc930502bc12ab55b0bddd144" exitCode=143 Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.187728 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" 
event={"ID":"08918475-de15-4f9a-96a3-450082e907e6","Type":"ContainerDied","Data":"c112c42d98ff69a07fb690128770638f9a1558fa5f3044133c32d6be56904c78"} Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.187777 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"08918475-de15-4f9a-96a3-450082e907e6","Type":"ContainerDied","Data":"14668cf05098c7ce0b26675fdf8c7c730fec615bc930502bc12ab55b0bddd144"} Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.187793 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"08918475-de15-4f9a-96a3-450082e907e6","Type":"ContainerDied","Data":"d7a63ec2decf86f5bb2d79cf3cfb1601fe1d3c53d7c9cd9770580cb0ca3e8ade"} Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.187792 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.207778 4786 scope.go:117] "RemoveContainer" containerID="3c8a6affc26317e1c771a85129de709fce9f6e3024f2fda3a048ba40cb5f8bad" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.224788 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.236013 4786 scope.go:117] "RemoveContainer" containerID="1908347159f07e616f7202a1b7d0bf85d4701a3c2493e544fe01df88f2c2c9d8" Jan 27 13:39:15 crc kubenswrapper[4786]: E0127 13:39:15.238551 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1908347159f07e616f7202a1b7d0bf85d4701a3c2493e544fe01df88f2c2c9d8\": container with ID starting with 1908347159f07e616f7202a1b7d0bf85d4701a3c2493e544fe01df88f2c2c9d8 not found: ID does not exist" containerID="1908347159f07e616f7202a1b7d0bf85d4701a3c2493e544fe01df88f2c2c9d8" Jan 27 13:39:15 crc kubenswrapper[4786]: 
I0127 13:39:15.238620 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1908347159f07e616f7202a1b7d0bf85d4701a3c2493e544fe01df88f2c2c9d8"} err="failed to get container status \"1908347159f07e616f7202a1b7d0bf85d4701a3c2493e544fe01df88f2c2c9d8\": rpc error: code = NotFound desc = could not find container \"1908347159f07e616f7202a1b7d0bf85d4701a3c2493e544fe01df88f2c2c9d8\": container with ID starting with 1908347159f07e616f7202a1b7d0bf85d4701a3c2493e544fe01df88f2c2c9d8 not found: ID does not exist" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.238653 4786 scope.go:117] "RemoveContainer" containerID="3c8a6affc26317e1c771a85129de709fce9f6e3024f2fda3a048ba40cb5f8bad" Jan 27 13:39:15 crc kubenswrapper[4786]: E0127 13:39:15.238966 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c8a6affc26317e1c771a85129de709fce9f6e3024f2fda3a048ba40cb5f8bad\": container with ID starting with 3c8a6affc26317e1c771a85129de709fce9f6e3024f2fda3a048ba40cb5f8bad not found: ID does not exist" containerID="3c8a6affc26317e1c771a85129de709fce9f6e3024f2fda3a048ba40cb5f8bad" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.238992 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c8a6affc26317e1c771a85129de709fce9f6e3024f2fda3a048ba40cb5f8bad"} err="failed to get container status \"3c8a6affc26317e1c771a85129de709fce9f6e3024f2fda3a048ba40cb5f8bad\": rpc error: code = NotFound desc = could not find container \"3c8a6affc26317e1c771a85129de709fce9f6e3024f2fda3a048ba40cb5f8bad\": container with ID starting with 3c8a6affc26317e1c771a85129de709fce9f6e3024f2fda3a048ba40cb5f8bad not found: ID does not exist" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.239010 4786 scope.go:117] "RemoveContainer" containerID="1908347159f07e616f7202a1b7d0bf85d4701a3c2493e544fe01df88f2c2c9d8" Jan 27 13:39:15 crc 
kubenswrapper[4786]: I0127 13:39:15.239314 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1908347159f07e616f7202a1b7d0bf85d4701a3c2493e544fe01df88f2c2c9d8"} err="failed to get container status \"1908347159f07e616f7202a1b7d0bf85d4701a3c2493e544fe01df88f2c2c9d8\": rpc error: code = NotFound desc = could not find container \"1908347159f07e616f7202a1b7d0bf85d4701a3c2493e544fe01df88f2c2c9d8\": container with ID starting with 1908347159f07e616f7202a1b7d0bf85d4701a3c2493e544fe01df88f2c2c9d8 not found: ID does not exist" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.239339 4786 scope.go:117] "RemoveContainer" containerID="3c8a6affc26317e1c771a85129de709fce9f6e3024f2fda3a048ba40cb5f8bad" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.239674 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c8a6affc26317e1c771a85129de709fce9f6e3024f2fda3a048ba40cb5f8bad"} err="failed to get container status \"3c8a6affc26317e1c771a85129de709fce9f6e3024f2fda3a048ba40cb5f8bad\": rpc error: code = NotFound desc = could not find container \"3c8a6affc26317e1c771a85129de709fce9f6e3024f2fda3a048ba40cb5f8bad\": container with ID starting with 3c8a6affc26317e1c771a85129de709fce9f6e3024f2fda3a048ba40cb5f8bad not found: ID does not exist" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.239699 4786 scope.go:117] "RemoveContainer" containerID="c112c42d98ff69a07fb690128770638f9a1558fa5f3044133c32d6be56904c78" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.240306 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.247742 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.268636 4786 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:39:15 crc kubenswrapper[4786]: E0127 13:39:15.269034 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5af74df-03d5-46e9-b12f-9b2b948381d2" containerName="nova-kuttl-api-log" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.269054 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5af74df-03d5-46e9-b12f-9b2b948381d2" containerName="nova-kuttl-api-log" Jan 27 13:39:15 crc kubenswrapper[4786]: E0127 13:39:15.269079 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4df6489-80cb-45c8-90b2-7fd2e9bca103" containerName="nova-manage" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.269088 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4df6489-80cb-45c8-90b2-7fd2e9bca103" containerName="nova-manage" Jan 27 13:39:15 crc kubenswrapper[4786]: E0127 13:39:15.269104 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08918475-de15-4f9a-96a3-450082e907e6" containerName="nova-kuttl-metadata-log" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.269111 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="08918475-de15-4f9a-96a3-450082e907e6" containerName="nova-kuttl-metadata-log" Jan 27 13:39:15 crc kubenswrapper[4786]: E0127 13:39:15.269138 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08918475-de15-4f9a-96a3-450082e907e6" containerName="nova-kuttl-metadata-metadata" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.269146 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="08918475-de15-4f9a-96a3-450082e907e6" containerName="nova-kuttl-metadata-metadata" Jan 27 13:39:15 crc kubenswrapper[4786]: E0127 13:39:15.269163 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5af74df-03d5-46e9-b12f-9b2b948381d2" containerName="nova-kuttl-api-api" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.269172 4786 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a5af74df-03d5-46e9-b12f-9b2b948381d2" containerName="nova-kuttl-api-api" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.269348 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="08918475-de15-4f9a-96a3-450082e907e6" containerName="nova-kuttl-metadata-log" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.269370 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5af74df-03d5-46e9-b12f-9b2b948381d2" containerName="nova-kuttl-api-api" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.269380 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4df6489-80cb-45c8-90b2-7fd2e9bca103" containerName="nova-manage" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.269392 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5af74df-03d5-46e9-b12f-9b2b948381d2" containerName="nova-kuttl-api-log" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.269403 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="08918475-de15-4f9a-96a3-450082e907e6" containerName="nova-kuttl-metadata-metadata" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.270382 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.272372 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.277353 4786 scope.go:117] "RemoveContainer" containerID="14668cf05098c7ce0b26675fdf8c7c730fec615bc930502bc12ab55b0bddd144" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.282772 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.295258 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.308224 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.309555 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.311952 4786 scope.go:117] "RemoveContainer" containerID="c112c42d98ff69a07fb690128770638f9a1558fa5f3044133c32d6be56904c78" Jan 27 13:39:15 crc kubenswrapper[4786]: E0127 13:39:15.312823 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c112c42d98ff69a07fb690128770638f9a1558fa5f3044133c32d6be56904c78\": container with ID starting with c112c42d98ff69a07fb690128770638f9a1558fa5f3044133c32d6be56904c78 not found: ID does not exist" containerID="c112c42d98ff69a07fb690128770638f9a1558fa5f3044133c32d6be56904c78" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.312862 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.312856 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c112c42d98ff69a07fb690128770638f9a1558fa5f3044133c32d6be56904c78"} err="failed to get container status \"c112c42d98ff69a07fb690128770638f9a1558fa5f3044133c32d6be56904c78\": rpc error: code = NotFound desc = could not find container \"c112c42d98ff69a07fb690128770638f9a1558fa5f3044133c32d6be56904c78\": container with ID starting with c112c42d98ff69a07fb690128770638f9a1558fa5f3044133c32d6be56904c78 not found: ID does not exist" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.312895 4786 scope.go:117] "RemoveContainer" containerID="14668cf05098c7ce0b26675fdf8c7c730fec615bc930502bc12ab55b0bddd144" Jan 27 13:39:15 crc kubenswrapper[4786]: E0127 13:39:15.313198 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14668cf05098c7ce0b26675fdf8c7c730fec615bc930502bc12ab55b0bddd144\": container with ID starting with 
14668cf05098c7ce0b26675fdf8c7c730fec615bc930502bc12ab55b0bddd144 not found: ID does not exist" containerID="14668cf05098c7ce0b26675fdf8c7c730fec615bc930502bc12ab55b0bddd144" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.313221 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14668cf05098c7ce0b26675fdf8c7c730fec615bc930502bc12ab55b0bddd144"} err="failed to get container status \"14668cf05098c7ce0b26675fdf8c7c730fec615bc930502bc12ab55b0bddd144\": rpc error: code = NotFound desc = could not find container \"14668cf05098c7ce0b26675fdf8c7c730fec615bc930502bc12ab55b0bddd144\": container with ID starting with 14668cf05098c7ce0b26675fdf8c7c730fec615bc930502bc12ab55b0bddd144 not found: ID does not exist" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.313234 4786 scope.go:117] "RemoveContainer" containerID="c112c42d98ff69a07fb690128770638f9a1558fa5f3044133c32d6be56904c78" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.314244 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c112c42d98ff69a07fb690128770638f9a1558fa5f3044133c32d6be56904c78"} err="failed to get container status \"c112c42d98ff69a07fb690128770638f9a1558fa5f3044133c32d6be56904c78\": rpc error: code = NotFound desc = could not find container \"c112c42d98ff69a07fb690128770638f9a1558fa5f3044133c32d6be56904c78\": container with ID starting with c112c42d98ff69a07fb690128770638f9a1558fa5f3044133c32d6be56904c78 not found: ID does not exist" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.314264 4786 scope.go:117] "RemoveContainer" containerID="14668cf05098c7ce0b26675fdf8c7c730fec615bc930502bc12ab55b0bddd144" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.314903 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14668cf05098c7ce0b26675fdf8c7c730fec615bc930502bc12ab55b0bddd144"} err="failed to get container status 
\"14668cf05098c7ce0b26675fdf8c7c730fec615bc930502bc12ab55b0bddd144\": rpc error: code = NotFound desc = could not find container \"14668cf05098c7ce0b26675fdf8c7c730fec615bc930502bc12ab55b0bddd144\": container with ID starting with 14668cf05098c7ce0b26675fdf8c7c730fec615bc930502bc12ab55b0bddd144 not found: ID does not exist" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.335371 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.426511 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgrfx\" (UniqueName: \"kubernetes.io/projected/1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd-kube-api-access-pgrfx\") pod \"nova-kuttl-api-0\" (UID: \"1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.426568 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd-logs\") pod \"nova-kuttl-api-0\" (UID: \"1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.426686 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd-config-data\") pod \"nova-kuttl-api-0\" (UID: \"1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.426724 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4d08136-66cf-470b-8aa6-b3a59fa583b4-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"f4d08136-66cf-470b-8aa6-b3a59fa583b4\") " 
pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.426867 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wgww\" (UniqueName: \"kubernetes.io/projected/f4d08136-66cf-470b-8aa6-b3a59fa583b4-kube-api-access-2wgww\") pod \"nova-kuttl-metadata-0\" (UID: \"f4d08136-66cf-470b-8aa6-b3a59fa583b4\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.426932 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4d08136-66cf-470b-8aa6-b3a59fa583b4-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"f4d08136-66cf-470b-8aa6-b3a59fa583b4\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.480366 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08918475-de15-4f9a-96a3-450082e907e6" path="/var/lib/kubelet/pods/08918475-de15-4f9a-96a3-450082e907e6/volumes" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.480982 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5af74df-03d5-46e9-b12f-9b2b948381d2" path="/var/lib/kubelet/pods/a5af74df-03d5-46e9-b12f-9b2b948381d2/volumes" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.528380 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgrfx\" (UniqueName: \"kubernetes.io/projected/1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd-kube-api-access-pgrfx\") pod \"nova-kuttl-api-0\" (UID: \"1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.528434 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd-logs\") pod 
\"nova-kuttl-api-0\" (UID: \"1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.528484 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd-config-data\") pod \"nova-kuttl-api-0\" (UID: \"1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.528507 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4d08136-66cf-470b-8aa6-b3a59fa583b4-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"f4d08136-66cf-470b-8aa6-b3a59fa583b4\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.528527 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wgww\" (UniqueName: \"kubernetes.io/projected/f4d08136-66cf-470b-8aa6-b3a59fa583b4-kube-api-access-2wgww\") pod \"nova-kuttl-metadata-0\" (UID: \"f4d08136-66cf-470b-8aa6-b3a59fa583b4\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.528559 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4d08136-66cf-470b-8aa6-b3a59fa583b4-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"f4d08136-66cf-470b-8aa6-b3a59fa583b4\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.528932 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd-logs\") pod \"nova-kuttl-api-0\" (UID: \"1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:39:15 crc 
kubenswrapper[4786]: I0127 13:39:15.529178 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4d08136-66cf-470b-8aa6-b3a59fa583b4-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"f4d08136-66cf-470b-8aa6-b3a59fa583b4\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.533143 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4d08136-66cf-470b-8aa6-b3a59fa583b4-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"f4d08136-66cf-470b-8aa6-b3a59fa583b4\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.543356 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd-config-data\") pod \"nova-kuttl-api-0\" (UID: \"1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.553568 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgrfx\" (UniqueName: \"kubernetes.io/projected/1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd-kube-api-access-pgrfx\") pod \"nova-kuttl-api-0\" (UID: \"1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.554148 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wgww\" (UniqueName: \"kubernetes.io/projected/f4d08136-66cf-470b-8aa6-b3a59fa583b4-kube-api-access-2wgww\") pod \"nova-kuttl-metadata-0\" (UID: \"f4d08136-66cf-470b-8aa6-b3a59fa583b4\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.603627 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:39:15 crc kubenswrapper[4786]: I0127 13:39:15.647049 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:39:16 crc kubenswrapper[4786]: I0127 13:39:16.088533 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:39:16 crc kubenswrapper[4786]: I0127 13:39:16.153562 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:39:16 crc kubenswrapper[4786]: I0127 13:39:16.199757 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd","Type":"ContainerStarted","Data":"940f94f72db880122bb605f2ba8a703f36dc93551d6a0c5211508de9a944adfa"} Jan 27 13:39:16 crc kubenswrapper[4786]: I0127 13:39:16.203562 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"f4d08136-66cf-470b-8aa6-b3a59fa583b4","Type":"ContainerStarted","Data":"ae047b96d7278e33d8bd4fbb7249ccada1837d397eefaec56131acc940489059"} Jan 27 13:39:16 crc kubenswrapper[4786]: I0127 13:39:16.513522 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:39:16 crc kubenswrapper[4786]: I0127 13:39:16.524468 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:39:17 crc kubenswrapper[4786]: I0127 13:39:17.212716 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"f4d08136-66cf-470b-8aa6-b3a59fa583b4","Type":"ContainerStarted","Data":"aa08e28101c20890ffbe66ed256b4655f3a25e69f9d9a01fcba874ef460e7422"} Jan 27 13:39:17 crc kubenswrapper[4786]: I0127 13:39:17.212783 4786 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"f4d08136-66cf-470b-8aa6-b3a59fa583b4","Type":"ContainerStarted","Data":"847dc6a79d5c8aed1a61f0fb07b6e829b08233ac76528dac3c111773dae79a47"} Jan 27 13:39:17 crc kubenswrapper[4786]: I0127 13:39:17.214568 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd","Type":"ContainerStarted","Data":"01d104304eab3cf84e94b1d637b7b59c188abfb1fe4eada9f13d398bf0705fdd"} Jan 27 13:39:17 crc kubenswrapper[4786]: I0127 13:39:17.214628 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd","Type":"ContainerStarted","Data":"c46efc3c37a8630916a31c8b37fd9d90445ab2396863fc33675c2d7e14da3cdd"} Jan 27 13:39:17 crc kubenswrapper[4786]: I0127 13:39:17.222998 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 27 13:39:17 crc kubenswrapper[4786]: I0127 13:39:17.229598 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.229562666 podStartE2EDuration="2.229562666s" podCreationTimestamp="2026-01-27 13:39:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:39:17.228420354 +0000 UTC m=+1940.439034493" watchObservedRunningTime="2026-01-27 13:39:17.229562666 +0000 UTC m=+1940.440176785" Jan 27 13:39:17 crc kubenswrapper[4786]: I0127 13:39:17.254780 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.2547585359999998 podStartE2EDuration="2.254758536s" podCreationTimestamp="2026-01-27 13:39:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:39:17.242370986 +0000 UTC m=+1940.452985125" watchObservedRunningTime="2026-01-27 13:39:17.254758536 +0000 UTC m=+1940.465372655" Jan 27 13:39:18 crc kubenswrapper[4786]: I0127 13:39:18.963744 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:39:19 crc kubenswrapper[4786]: I0127 13:39:19.083149 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a6a703a-c54b-46ed-98d7-72d3e821bb7f-config-data\") pod \"5a6a703a-c54b-46ed-98d7-72d3e821bb7f\" (UID: \"5a6a703a-c54b-46ed-98d7-72d3e821bb7f\") " Jan 27 13:39:19 crc kubenswrapper[4786]: I0127 13:39:19.083197 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tgbg\" (UniqueName: \"kubernetes.io/projected/5a6a703a-c54b-46ed-98d7-72d3e821bb7f-kube-api-access-5tgbg\") pod \"5a6a703a-c54b-46ed-98d7-72d3e821bb7f\" (UID: \"5a6a703a-c54b-46ed-98d7-72d3e821bb7f\") " Jan 27 13:39:19 crc kubenswrapper[4786]: I0127 13:39:19.089699 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a6a703a-c54b-46ed-98d7-72d3e821bb7f-kube-api-access-5tgbg" (OuterVolumeSpecName: "kube-api-access-5tgbg") pod "5a6a703a-c54b-46ed-98d7-72d3e821bb7f" (UID: "5a6a703a-c54b-46ed-98d7-72d3e821bb7f"). InnerVolumeSpecName "kube-api-access-5tgbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:39:19 crc kubenswrapper[4786]: I0127 13:39:19.108885 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a6a703a-c54b-46ed-98d7-72d3e821bb7f-config-data" (OuterVolumeSpecName: "config-data") pod "5a6a703a-c54b-46ed-98d7-72d3e821bb7f" (UID: "5a6a703a-c54b-46ed-98d7-72d3e821bb7f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:39:19 crc kubenswrapper[4786]: I0127 13:39:19.184979 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a6a703a-c54b-46ed-98d7-72d3e821bb7f-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:39:19 crc kubenswrapper[4786]: I0127 13:39:19.185040 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tgbg\" (UniqueName: \"kubernetes.io/projected/5a6a703a-c54b-46ed-98d7-72d3e821bb7f-kube-api-access-5tgbg\") on node \"crc\" DevicePath \"\"" Jan 27 13:39:19 crc kubenswrapper[4786]: I0127 13:39:19.234434 4786 generic.go:334] "Generic (PLEG): container finished" podID="5a6a703a-c54b-46ed-98d7-72d3e821bb7f" containerID="fbfe810256ade72717132925281639c18155483dd3240fbf420dce7ba97d2b82" exitCode=0 Jan 27 13:39:19 crc kubenswrapper[4786]: I0127 13:39:19.234473 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"5a6a703a-c54b-46ed-98d7-72d3e821bb7f","Type":"ContainerDied","Data":"fbfe810256ade72717132925281639c18155483dd3240fbf420dce7ba97d2b82"} Jan 27 13:39:19 crc kubenswrapper[4786]: I0127 13:39:19.234492 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:39:19 crc kubenswrapper[4786]: I0127 13:39:19.234509 4786 scope.go:117] "RemoveContainer" containerID="fbfe810256ade72717132925281639c18155483dd3240fbf420dce7ba97d2b82" Jan 27 13:39:19 crc kubenswrapper[4786]: I0127 13:39:19.234498 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"5a6a703a-c54b-46ed-98d7-72d3e821bb7f","Type":"ContainerDied","Data":"9b825d2b8032f141ca4a82cb867005df9c63756b31a344173550f6b49650be72"} Jan 27 13:39:19 crc kubenswrapper[4786]: I0127 13:39:19.265689 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:39:19 crc kubenswrapper[4786]: I0127 13:39:19.273050 4786 scope.go:117] "RemoveContainer" containerID="fbfe810256ade72717132925281639c18155483dd3240fbf420dce7ba97d2b82" Jan 27 13:39:19 crc kubenswrapper[4786]: E0127 13:39:19.273667 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbfe810256ade72717132925281639c18155483dd3240fbf420dce7ba97d2b82\": container with ID starting with fbfe810256ade72717132925281639c18155483dd3240fbf420dce7ba97d2b82 not found: ID does not exist" containerID="fbfe810256ade72717132925281639c18155483dd3240fbf420dce7ba97d2b82" Jan 27 13:39:19 crc kubenswrapper[4786]: I0127 13:39:19.273707 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbfe810256ade72717132925281639c18155483dd3240fbf420dce7ba97d2b82"} err="failed to get container status \"fbfe810256ade72717132925281639c18155483dd3240fbf420dce7ba97d2b82\": rpc error: code = NotFound desc = could not find container \"fbfe810256ade72717132925281639c18155483dd3240fbf420dce7ba97d2b82\": container with ID starting with fbfe810256ade72717132925281639c18155483dd3240fbf420dce7ba97d2b82 not found: ID does not exist" Jan 27 13:39:19 crc 
kubenswrapper[4786]: I0127 13:39:19.274773 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:39:19 crc kubenswrapper[4786]: I0127 13:39:19.289119 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:39:19 crc kubenswrapper[4786]: E0127 13:39:19.289450 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a6a703a-c54b-46ed-98d7-72d3e821bb7f" containerName="nova-kuttl-scheduler-scheduler" Jan 27 13:39:19 crc kubenswrapper[4786]: I0127 13:39:19.289467 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a6a703a-c54b-46ed-98d7-72d3e821bb7f" containerName="nova-kuttl-scheduler-scheduler" Jan 27 13:39:19 crc kubenswrapper[4786]: I0127 13:39:19.289652 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a6a703a-c54b-46ed-98d7-72d3e821bb7f" containerName="nova-kuttl-scheduler-scheduler" Jan 27 13:39:19 crc kubenswrapper[4786]: I0127 13:39:19.290373 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:39:19 crc kubenswrapper[4786]: I0127 13:39:19.297471 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 27 13:39:19 crc kubenswrapper[4786]: I0127 13:39:19.302506 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:39:19 crc kubenswrapper[4786]: I0127 13:39:19.389185 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq8hn\" (UniqueName: \"kubernetes.io/projected/c02c57c0-1b08-4ba4-9784-ed93d641fd86-kube-api-access-lq8hn\") pod \"nova-kuttl-scheduler-0\" (UID: \"c02c57c0-1b08-4ba4-9784-ed93d641fd86\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:39:19 crc kubenswrapper[4786]: I0127 13:39:19.389505 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c02c57c0-1b08-4ba4-9784-ed93d641fd86-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"c02c57c0-1b08-4ba4-9784-ed93d641fd86\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:39:19 crc kubenswrapper[4786]: I0127 13:39:19.473481 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a6a703a-c54b-46ed-98d7-72d3e821bb7f" path="/var/lib/kubelet/pods/5a6a703a-c54b-46ed-98d7-72d3e821bb7f/volumes" Jan 27 13:39:19 crc kubenswrapper[4786]: I0127 13:39:19.491716 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq8hn\" (UniqueName: \"kubernetes.io/projected/c02c57c0-1b08-4ba4-9784-ed93d641fd86-kube-api-access-lq8hn\") pod \"nova-kuttl-scheduler-0\" (UID: \"c02c57c0-1b08-4ba4-9784-ed93d641fd86\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:39:19 crc kubenswrapper[4786]: I0127 13:39:19.491772 4786 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c02c57c0-1b08-4ba4-9784-ed93d641fd86-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"c02c57c0-1b08-4ba4-9784-ed93d641fd86\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:39:19 crc kubenswrapper[4786]: I0127 13:39:19.504693 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c02c57c0-1b08-4ba4-9784-ed93d641fd86-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"c02c57c0-1b08-4ba4-9784-ed93d641fd86\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:39:19 crc kubenswrapper[4786]: I0127 13:39:19.509566 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq8hn\" (UniqueName: \"kubernetes.io/projected/c02c57c0-1b08-4ba4-9784-ed93d641fd86-kube-api-access-lq8hn\") pod \"nova-kuttl-scheduler-0\" (UID: \"c02c57c0-1b08-4ba4-9784-ed93d641fd86\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:39:19 crc kubenswrapper[4786]: I0127 13:39:19.619826 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:39:20 crc kubenswrapper[4786]: I0127 13:39:20.059468 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:39:20 crc kubenswrapper[4786]: I0127 13:39:20.246730 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"c02c57c0-1b08-4ba4-9784-ed93d641fd86","Type":"ContainerStarted","Data":"30ca3652a5aa555228a080670d807333ae7fda6b3c6af8b053f62163b25d1e13"} Jan 27 13:39:20 crc kubenswrapper[4786]: I0127 13:39:20.246769 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"c02c57c0-1b08-4ba4-9784-ed93d641fd86","Type":"ContainerStarted","Data":"fed1fd49d7d0fdb3c1caecee9af072336e3ac3808896d3bb49d48439a8866d0c"} Jan 27 13:39:20 crc kubenswrapper[4786]: I0127 13:39:20.267135 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=1.267118293 podStartE2EDuration="1.267118293s" podCreationTimestamp="2026-01-27 13:39:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:39:20.259628668 +0000 UTC m=+1943.470242807" watchObservedRunningTime="2026-01-27 13:39:20.267118293 +0000 UTC m=+1943.477732412" Jan 27 13:39:20 crc kubenswrapper[4786]: I0127 13:39:20.605180 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:39:20 crc kubenswrapper[4786]: I0127 13:39:20.605757 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:39:23 crc kubenswrapper[4786]: I0127 13:39:23.589330 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" 
Jan 27 13:39:24 crc kubenswrapper[4786]: I0127 13:39:24.009555 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4snp9"] Jan 27 13:39:24 crc kubenswrapper[4786]: I0127 13:39:24.010648 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4snp9" Jan 27 13:39:24 crc kubenswrapper[4786]: I0127 13:39:24.013264 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-scripts" Jan 27 13:39:24 crc kubenswrapper[4786]: I0127 13:39:24.014433 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-config-data" Jan 27 13:39:24 crc kubenswrapper[4786]: I0127 13:39:24.021585 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4snp9"] Jan 27 13:39:24 crc kubenswrapper[4786]: I0127 13:39:24.162493 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/198b59b6-ce67-44fa-bf96-4c080e830106-scripts\") pod \"nova-kuttl-cell1-cell-mapping-4snp9\" (UID: \"198b59b6-ce67-44fa-bf96-4c080e830106\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4snp9" Jan 27 13:39:24 crc kubenswrapper[4786]: I0127 13:39:24.162553 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khrrs\" (UniqueName: \"kubernetes.io/projected/198b59b6-ce67-44fa-bf96-4c080e830106-kube-api-access-khrrs\") pod \"nova-kuttl-cell1-cell-mapping-4snp9\" (UID: \"198b59b6-ce67-44fa-bf96-4c080e830106\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4snp9" Jan 27 13:39:24 crc kubenswrapper[4786]: I0127 13:39:24.162756 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/198b59b6-ce67-44fa-bf96-4c080e830106-config-data\") pod \"nova-kuttl-cell1-cell-mapping-4snp9\" (UID: \"198b59b6-ce67-44fa-bf96-4c080e830106\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4snp9" Jan 27 13:39:24 crc kubenswrapper[4786]: I0127 13:39:24.264653 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/198b59b6-ce67-44fa-bf96-4c080e830106-scripts\") pod \"nova-kuttl-cell1-cell-mapping-4snp9\" (UID: \"198b59b6-ce67-44fa-bf96-4c080e830106\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4snp9" Jan 27 13:39:24 crc kubenswrapper[4786]: I0127 13:39:24.264722 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khrrs\" (UniqueName: \"kubernetes.io/projected/198b59b6-ce67-44fa-bf96-4c080e830106-kube-api-access-khrrs\") pod \"nova-kuttl-cell1-cell-mapping-4snp9\" (UID: \"198b59b6-ce67-44fa-bf96-4c080e830106\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4snp9" Jan 27 13:39:24 crc kubenswrapper[4786]: I0127 13:39:24.264769 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/198b59b6-ce67-44fa-bf96-4c080e830106-config-data\") pod \"nova-kuttl-cell1-cell-mapping-4snp9\" (UID: \"198b59b6-ce67-44fa-bf96-4c080e830106\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4snp9" Jan 27 13:39:24 crc kubenswrapper[4786]: I0127 13:39:24.270340 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/198b59b6-ce67-44fa-bf96-4c080e830106-scripts\") pod \"nova-kuttl-cell1-cell-mapping-4snp9\" (UID: \"198b59b6-ce67-44fa-bf96-4c080e830106\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4snp9" Jan 27 13:39:24 crc kubenswrapper[4786]: I0127 13:39:24.277698 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/198b59b6-ce67-44fa-bf96-4c080e830106-config-data\") pod \"nova-kuttl-cell1-cell-mapping-4snp9\" (UID: \"198b59b6-ce67-44fa-bf96-4c080e830106\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4snp9" Jan 27 13:39:24 crc kubenswrapper[4786]: I0127 13:39:24.291596 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khrrs\" (UniqueName: \"kubernetes.io/projected/198b59b6-ce67-44fa-bf96-4c080e830106-kube-api-access-khrrs\") pod \"nova-kuttl-cell1-cell-mapping-4snp9\" (UID: \"198b59b6-ce67-44fa-bf96-4c080e830106\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4snp9" Jan 27 13:39:24 crc kubenswrapper[4786]: I0127 13:39:24.329224 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4snp9" Jan 27 13:39:24 crc kubenswrapper[4786]: I0127 13:39:24.620040 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:39:24 crc kubenswrapper[4786]: I0127 13:39:24.800003 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4snp9"] Jan 27 13:39:24 crc kubenswrapper[4786]: W0127 13:39:24.803753 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod198b59b6_ce67_44fa_bf96_4c080e830106.slice/crio-e7bb9c966cc5cc9275e50a67319ddaafb1bbcb19e3bec1af864ea2871e8e6e86 WatchSource:0}: Error finding container e7bb9c966cc5cc9275e50a67319ddaafb1bbcb19e3bec1af864ea2871e8e6e86: Status 404 returned error can't find the container with id e7bb9c966cc5cc9275e50a67319ddaafb1bbcb19e3bec1af864ea2871e8e6e86 Jan 27 13:39:25 crc kubenswrapper[4786]: I0127 13:39:25.300739 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4snp9" 
event={"ID":"198b59b6-ce67-44fa-bf96-4c080e830106","Type":"ContainerStarted","Data":"18977355d27f18d461cd7c746c65146ba54d727a09e9aa866c1b892de3a9b3d8"} Jan 27 13:39:25 crc kubenswrapper[4786]: I0127 13:39:25.300795 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4snp9" event={"ID":"198b59b6-ce67-44fa-bf96-4c080e830106","Type":"ContainerStarted","Data":"e7bb9c966cc5cc9275e50a67319ddaafb1bbcb19e3bec1af864ea2871e8e6e86"} Jan 27 13:39:25 crc kubenswrapper[4786]: I0127 13:39:25.316475 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4snp9" podStartSLOduration=2.316429871 podStartE2EDuration="2.316429871s" podCreationTimestamp="2026-01-27 13:39:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:39:25.314999093 +0000 UTC m=+1948.525613212" watchObservedRunningTime="2026-01-27 13:39:25.316429871 +0000 UTC m=+1948.527043990" Jan 27 13:39:25 crc kubenswrapper[4786]: I0127 13:39:25.604641 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:39:25 crc kubenswrapper[4786]: I0127 13:39:25.605401 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:39:25 crc kubenswrapper[4786]: I0127 13:39:25.667939 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:39:25 crc kubenswrapper[4786]: I0127 13:39:25.667992 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:39:26 crc kubenswrapper[4786]: I0127 13:39:26.686801 4786 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" 
podUID="f4d08136-66cf-470b-8aa6-b3a59fa583b4" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.226:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 13:39:26 crc kubenswrapper[4786]: I0127 13:39:26.686830 4786 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="f4d08136-66cf-470b-8aa6-b3a59fa583b4" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.226:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 13:39:26 crc kubenswrapper[4786]: I0127 13:39:26.768940 4786 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.227:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 13:39:26 crc kubenswrapper[4786]: I0127 13:39:26.768989 4786 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.227:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 13:39:29 crc kubenswrapper[4786]: I0127 13:39:29.620410 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:39:29 crc kubenswrapper[4786]: I0127 13:39:29.646839 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:39:30 crc kubenswrapper[4786]: I0127 13:39:30.350578 4786 generic.go:334] "Generic (PLEG): container finished" podID="198b59b6-ce67-44fa-bf96-4c080e830106" 
containerID="18977355d27f18d461cd7c746c65146ba54d727a09e9aa866c1b892de3a9b3d8" exitCode=0 Jan 27 13:39:30 crc kubenswrapper[4786]: I0127 13:39:30.350999 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4snp9" event={"ID":"198b59b6-ce67-44fa-bf96-4c080e830106","Type":"ContainerDied","Data":"18977355d27f18d461cd7c746c65146ba54d727a09e9aa866c1b892de3a9b3d8"} Jan 27 13:39:30 crc kubenswrapper[4786]: I0127 13:39:30.383674 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 27 13:39:31 crc kubenswrapper[4786]: I0127 13:39:31.706586 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4snp9" Jan 27 13:39:31 crc kubenswrapper[4786]: I0127 13:39:31.805905 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/198b59b6-ce67-44fa-bf96-4c080e830106-scripts\") pod \"198b59b6-ce67-44fa-bf96-4c080e830106\" (UID: \"198b59b6-ce67-44fa-bf96-4c080e830106\") " Jan 27 13:39:31 crc kubenswrapper[4786]: I0127 13:39:31.806013 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/198b59b6-ce67-44fa-bf96-4c080e830106-config-data\") pod \"198b59b6-ce67-44fa-bf96-4c080e830106\" (UID: \"198b59b6-ce67-44fa-bf96-4c080e830106\") " Jan 27 13:39:31 crc kubenswrapper[4786]: I0127 13:39:31.806044 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khrrs\" (UniqueName: \"kubernetes.io/projected/198b59b6-ce67-44fa-bf96-4c080e830106-kube-api-access-khrrs\") pod \"198b59b6-ce67-44fa-bf96-4c080e830106\" (UID: \"198b59b6-ce67-44fa-bf96-4c080e830106\") " Jan 27 13:39:31 crc kubenswrapper[4786]: I0127 13:39:31.811691 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/198b59b6-ce67-44fa-bf96-4c080e830106-kube-api-access-khrrs" (OuterVolumeSpecName: "kube-api-access-khrrs") pod "198b59b6-ce67-44fa-bf96-4c080e830106" (UID: "198b59b6-ce67-44fa-bf96-4c080e830106"). InnerVolumeSpecName "kube-api-access-khrrs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:39:31 crc kubenswrapper[4786]: I0127 13:39:31.811744 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/198b59b6-ce67-44fa-bf96-4c080e830106-scripts" (OuterVolumeSpecName: "scripts") pod "198b59b6-ce67-44fa-bf96-4c080e830106" (UID: "198b59b6-ce67-44fa-bf96-4c080e830106"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:39:31 crc kubenswrapper[4786]: I0127 13:39:31.829781 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/198b59b6-ce67-44fa-bf96-4c080e830106-config-data" (OuterVolumeSpecName: "config-data") pod "198b59b6-ce67-44fa-bf96-4c080e830106" (UID: "198b59b6-ce67-44fa-bf96-4c080e830106"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:39:31 crc kubenswrapper[4786]: I0127 13:39:31.907582 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/198b59b6-ce67-44fa-bf96-4c080e830106-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:39:31 crc kubenswrapper[4786]: I0127 13:39:31.907675 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/198b59b6-ce67-44fa-bf96-4c080e830106-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:39:31 crc kubenswrapper[4786]: I0127 13:39:31.907687 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khrrs\" (UniqueName: \"kubernetes.io/projected/198b59b6-ce67-44fa-bf96-4c080e830106-kube-api-access-khrrs\") on node \"crc\" DevicePath \"\"" Jan 27 13:39:32 crc kubenswrapper[4786]: I0127 13:39:32.369271 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4snp9" event={"ID":"198b59b6-ce67-44fa-bf96-4c080e830106","Type":"ContainerDied","Data":"e7bb9c966cc5cc9275e50a67319ddaafb1bbcb19e3bec1af864ea2871e8e6e86"} Jan 27 13:39:32 crc kubenswrapper[4786]: I0127 13:39:32.369669 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7bb9c966cc5cc9275e50a67319ddaafb1bbcb19e3bec1af864ea2871e8e6e86" Jan 27 13:39:32 crc kubenswrapper[4786]: I0127 13:39:32.369319 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4snp9" Jan 27 13:39:32 crc kubenswrapper[4786]: I0127 13:39:32.559976 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:39:32 crc kubenswrapper[4786]: I0127 13:39:32.560431 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd" containerName="nova-kuttl-api-log" containerID="cri-o://c46efc3c37a8630916a31c8b37fd9d90445ab2396863fc33675c2d7e14da3cdd" gracePeriod=30 Jan 27 13:39:32 crc kubenswrapper[4786]: I0127 13:39:32.560490 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd" containerName="nova-kuttl-api-api" containerID="cri-o://01d104304eab3cf84e94b1d637b7b59c188abfb1fe4eada9f13d398bf0705fdd" gracePeriod=30 Jan 27 13:39:32 crc kubenswrapper[4786]: I0127 13:39:32.573872 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 27 13:39:32 crc kubenswrapper[4786]: I0127 13:39:32.574198 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="c02c57c0-1b08-4ba4-9784-ed93d641fd86" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://30ca3652a5aa555228a080670d807333ae7fda6b3c6af8b053f62163b25d1e13" gracePeriod=30 Jan 27 13:39:32 crc kubenswrapper[4786]: I0127 13:39:32.593897 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:39:32 crc kubenswrapper[4786]: I0127 13:39:32.594358 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="f4d08136-66cf-470b-8aa6-b3a59fa583b4" containerName="nova-kuttl-metadata-log" 
containerID="cri-o://847dc6a79d5c8aed1a61f0fb07b6e829b08233ac76528dac3c111773dae79a47" gracePeriod=30 Jan 27 13:39:32 crc kubenswrapper[4786]: I0127 13:39:32.594499 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="f4d08136-66cf-470b-8aa6-b3a59fa583b4" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://aa08e28101c20890ffbe66ed256b4655f3a25e69f9d9a01fcba874ef460e7422" gracePeriod=30 Jan 27 13:39:33 crc kubenswrapper[4786]: I0127 13:39:33.378990 4786 generic.go:334] "Generic (PLEG): container finished" podID="f4d08136-66cf-470b-8aa6-b3a59fa583b4" containerID="847dc6a79d5c8aed1a61f0fb07b6e829b08233ac76528dac3c111773dae79a47" exitCode=143 Jan 27 13:39:33 crc kubenswrapper[4786]: I0127 13:39:33.379053 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"f4d08136-66cf-470b-8aa6-b3a59fa583b4","Type":"ContainerDied","Data":"847dc6a79d5c8aed1a61f0fb07b6e829b08233ac76528dac3c111773dae79a47"} Jan 27 13:39:33 crc kubenswrapper[4786]: I0127 13:39:33.381041 4786 generic.go:334] "Generic (PLEG): container finished" podID="1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd" containerID="c46efc3c37a8630916a31c8b37fd9d90445ab2396863fc33675c2d7e14da3cdd" exitCode=143 Jan 27 13:39:33 crc kubenswrapper[4786]: I0127 13:39:33.381071 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd","Type":"ContainerDied","Data":"c46efc3c37a8630916a31c8b37fd9d90445ab2396863fc33675c2d7e14da3cdd"} Jan 27 13:39:34 crc kubenswrapper[4786]: E0127 13:39:34.622407 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="30ca3652a5aa555228a080670d807333ae7fda6b3c6af8b053f62163b25d1e13" 
cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 13:39:34 crc kubenswrapper[4786]: E0127 13:39:34.624102 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="30ca3652a5aa555228a080670d807333ae7fda6b3c6af8b053f62163b25d1e13" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 13:39:34 crc kubenswrapper[4786]: E0127 13:39:34.625986 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="30ca3652a5aa555228a080670d807333ae7fda6b3c6af8b053f62163b25d1e13" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 13:39:34 crc kubenswrapper[4786]: E0127 13:39:34.626277 4786 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="c02c57c0-1b08-4ba4-9784-ed93d641fd86" containerName="nova-kuttl-scheduler-scheduler" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.225212 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.236919 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.281142 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wgww\" (UniqueName: \"kubernetes.io/projected/f4d08136-66cf-470b-8aa6-b3a59fa583b4-kube-api-access-2wgww\") pod \"f4d08136-66cf-470b-8aa6-b3a59fa583b4\" (UID: \"f4d08136-66cf-470b-8aa6-b3a59fa583b4\") " Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.281216 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd-config-data\") pod \"1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd\" (UID: \"1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd\") " Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.281251 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4d08136-66cf-470b-8aa6-b3a59fa583b4-logs\") pod \"f4d08136-66cf-470b-8aa6-b3a59fa583b4\" (UID: \"f4d08136-66cf-470b-8aa6-b3a59fa583b4\") " Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.281301 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgrfx\" (UniqueName: \"kubernetes.io/projected/1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd-kube-api-access-pgrfx\") pod \"1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd\" (UID: \"1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd\") " Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.281335 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd-logs\") pod \"1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd\" (UID: \"1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd\") " Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.281507 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f4d08136-66cf-470b-8aa6-b3a59fa583b4-config-data\") pod \"f4d08136-66cf-470b-8aa6-b3a59fa583b4\" (UID: \"f4d08136-66cf-470b-8aa6-b3a59fa583b4\") " Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.282232 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd-logs" (OuterVolumeSpecName: "logs") pod "1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd" (UID: "1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.282242 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4d08136-66cf-470b-8aa6-b3a59fa583b4-logs" (OuterVolumeSpecName: "logs") pod "f4d08136-66cf-470b-8aa6-b3a59fa583b4" (UID: "f4d08136-66cf-470b-8aa6-b3a59fa583b4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.287264 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd-kube-api-access-pgrfx" (OuterVolumeSpecName: "kube-api-access-pgrfx") pod "1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd" (UID: "1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd"). InnerVolumeSpecName "kube-api-access-pgrfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.292100 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4d08136-66cf-470b-8aa6-b3a59fa583b4-kube-api-access-2wgww" (OuterVolumeSpecName: "kube-api-access-2wgww") pod "f4d08136-66cf-470b-8aa6-b3a59fa583b4" (UID: "f4d08136-66cf-470b-8aa6-b3a59fa583b4"). InnerVolumeSpecName "kube-api-access-2wgww". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.303807 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd-config-data" (OuterVolumeSpecName: "config-data") pod "1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd" (UID: "1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.310396 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4d08136-66cf-470b-8aa6-b3a59fa583b4-config-data" (OuterVolumeSpecName: "config-data") pod "f4d08136-66cf-470b-8aa6-b3a59fa583b4" (UID: "f4d08136-66cf-470b-8aa6-b3a59fa583b4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.383258 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4d08136-66cf-470b-8aa6-b3a59fa583b4-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.383300 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wgww\" (UniqueName: \"kubernetes.io/projected/f4d08136-66cf-470b-8aa6-b3a59fa583b4-kube-api-access-2wgww\") on node \"crc\" DevicePath \"\"" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.383349 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.383362 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4d08136-66cf-470b-8aa6-b3a59fa583b4-logs\") on node \"crc\" DevicePath \"\"" Jan 27 13:39:36 crc kubenswrapper[4786]: 
I0127 13:39:36.383371 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgrfx\" (UniqueName: \"kubernetes.io/projected/1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd-kube-api-access-pgrfx\") on node \"crc\" DevicePath \"\"" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.383378 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd-logs\") on node \"crc\" DevicePath \"\"" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.409456 4786 generic.go:334] "Generic (PLEG): container finished" podID="f4d08136-66cf-470b-8aa6-b3a59fa583b4" containerID="aa08e28101c20890ffbe66ed256b4655f3a25e69f9d9a01fcba874ef460e7422" exitCode=0 Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.409490 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"f4d08136-66cf-470b-8aa6-b3a59fa583b4","Type":"ContainerDied","Data":"aa08e28101c20890ffbe66ed256b4655f3a25e69f9d9a01fcba874ef460e7422"} Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.409529 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.409552 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"f4d08136-66cf-470b-8aa6-b3a59fa583b4","Type":"ContainerDied","Data":"ae047b96d7278e33d8bd4fbb7249ccada1837d397eefaec56131acc940489059"} Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.409572 4786 scope.go:117] "RemoveContainer" containerID="aa08e28101c20890ffbe66ed256b4655f3a25e69f9d9a01fcba874ef460e7422" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.413300 4786 generic.go:334] "Generic (PLEG): container finished" podID="1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd" containerID="01d104304eab3cf84e94b1d637b7b59c188abfb1fe4eada9f13d398bf0705fdd" exitCode=0 Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.413357 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd","Type":"ContainerDied","Data":"01d104304eab3cf84e94b1d637b7b59c188abfb1fe4eada9f13d398bf0705fdd"} Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.413410 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.413418 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd","Type":"ContainerDied","Data":"940f94f72db880122bb605f2ba8a703f36dc93551d6a0c5211508de9a944adfa"} Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.441766 4786 scope.go:117] "RemoveContainer" containerID="847dc6a79d5c8aed1a61f0fb07b6e829b08233ac76528dac3c111773dae79a47" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.453305 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.468775 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.479177 4786 scope.go:117] "RemoveContainer" containerID="aa08e28101c20890ffbe66ed256b4655f3a25e69f9d9a01fcba874ef460e7422" Jan 27 13:39:36 crc kubenswrapper[4786]: E0127 13:39:36.479796 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa08e28101c20890ffbe66ed256b4655f3a25e69f9d9a01fcba874ef460e7422\": container with ID starting with aa08e28101c20890ffbe66ed256b4655f3a25e69f9d9a01fcba874ef460e7422 not found: ID does not exist" containerID="aa08e28101c20890ffbe66ed256b4655f3a25e69f9d9a01fcba874ef460e7422" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.479945 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa08e28101c20890ffbe66ed256b4655f3a25e69f9d9a01fcba874ef460e7422"} err="failed to get container status \"aa08e28101c20890ffbe66ed256b4655f3a25e69f9d9a01fcba874ef460e7422\": rpc error: code = NotFound desc = could not find container \"aa08e28101c20890ffbe66ed256b4655f3a25e69f9d9a01fcba874ef460e7422\": 
container with ID starting with aa08e28101c20890ffbe66ed256b4655f3a25e69f9d9a01fcba874ef460e7422 not found: ID does not exist" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.480064 4786 scope.go:117] "RemoveContainer" containerID="847dc6a79d5c8aed1a61f0fb07b6e829b08233ac76528dac3c111773dae79a47" Jan 27 13:39:36 crc kubenswrapper[4786]: E0127 13:39:36.481595 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"847dc6a79d5c8aed1a61f0fb07b6e829b08233ac76528dac3c111773dae79a47\": container with ID starting with 847dc6a79d5c8aed1a61f0fb07b6e829b08233ac76528dac3c111773dae79a47 not found: ID does not exist" containerID="847dc6a79d5c8aed1a61f0fb07b6e829b08233ac76528dac3c111773dae79a47" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.481639 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"847dc6a79d5c8aed1a61f0fb07b6e829b08233ac76528dac3c111773dae79a47"} err="failed to get container status \"847dc6a79d5c8aed1a61f0fb07b6e829b08233ac76528dac3c111773dae79a47\": rpc error: code = NotFound desc = could not find container \"847dc6a79d5c8aed1a61f0fb07b6e829b08233ac76528dac3c111773dae79a47\": container with ID starting with 847dc6a79d5c8aed1a61f0fb07b6e829b08233ac76528dac3c111773dae79a47 not found: ID does not exist" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.481661 4786 scope.go:117] "RemoveContainer" containerID="01d104304eab3cf84e94b1d637b7b59c188abfb1fe4eada9f13d398bf0705fdd" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.484853 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.494459 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.504637 4786 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:39:36 crc kubenswrapper[4786]: E0127 13:39:36.505062 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd" containerName="nova-kuttl-api-api" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.505086 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd" containerName="nova-kuttl-api-api" Jan 27 13:39:36 crc kubenswrapper[4786]: E0127 13:39:36.505107 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd" containerName="nova-kuttl-api-log" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.505115 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd" containerName="nova-kuttl-api-log" Jan 27 13:39:36 crc kubenswrapper[4786]: E0127 13:39:36.505130 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="198b59b6-ce67-44fa-bf96-4c080e830106" containerName="nova-manage" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.505138 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="198b59b6-ce67-44fa-bf96-4c080e830106" containerName="nova-manage" Jan 27 13:39:36 crc kubenswrapper[4786]: E0127 13:39:36.505161 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4d08136-66cf-470b-8aa6-b3a59fa583b4" containerName="nova-kuttl-metadata-log" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.505169 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4d08136-66cf-470b-8aa6-b3a59fa583b4" containerName="nova-kuttl-metadata-log" Jan 27 13:39:36 crc kubenswrapper[4786]: E0127 13:39:36.505185 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4d08136-66cf-470b-8aa6-b3a59fa583b4" containerName="nova-kuttl-metadata-metadata" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.505192 4786 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f4d08136-66cf-470b-8aa6-b3a59fa583b4" containerName="nova-kuttl-metadata-metadata" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.505400 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4d08136-66cf-470b-8aa6-b3a59fa583b4" containerName="nova-kuttl-metadata-log" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.505424 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd" containerName="nova-kuttl-api-api" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.505436 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="198b59b6-ce67-44fa-bf96-4c080e830106" containerName="nova-manage" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.505453 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd" containerName="nova-kuttl-api-log" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.505463 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4d08136-66cf-470b-8aa6-b3a59fa583b4" containerName="nova-kuttl-metadata-metadata" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.506541 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.508763 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.512537 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.514224 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.517310 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.523665 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.532106 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.561440 4786 scope.go:117] "RemoveContainer" containerID="c46efc3c37a8630916a31c8b37fd9d90445ab2396863fc33675c2d7e14da3cdd" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.583491 4786 scope.go:117] "RemoveContainer" containerID="01d104304eab3cf84e94b1d637b7b59c188abfb1fe4eada9f13d398bf0705fdd" Jan 27 13:39:36 crc kubenswrapper[4786]: E0127 13:39:36.584010 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01d104304eab3cf84e94b1d637b7b59c188abfb1fe4eada9f13d398bf0705fdd\": container with ID starting with 01d104304eab3cf84e94b1d637b7b59c188abfb1fe4eada9f13d398bf0705fdd not found: ID does not exist" containerID="01d104304eab3cf84e94b1d637b7b59c188abfb1fe4eada9f13d398bf0705fdd" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.584107 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01d104304eab3cf84e94b1d637b7b59c188abfb1fe4eada9f13d398bf0705fdd"} err="failed to get container status \"01d104304eab3cf84e94b1d637b7b59c188abfb1fe4eada9f13d398bf0705fdd\": rpc error: code = NotFound desc = could not find container \"01d104304eab3cf84e94b1d637b7b59c188abfb1fe4eada9f13d398bf0705fdd\": container with ID starting with 01d104304eab3cf84e94b1d637b7b59c188abfb1fe4eada9f13d398bf0705fdd not found: ID does not exist" Jan 27 
13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.584181 4786 scope.go:117] "RemoveContainer" containerID="c46efc3c37a8630916a31c8b37fd9d90445ab2396863fc33675c2d7e14da3cdd" Jan 27 13:39:36 crc kubenswrapper[4786]: E0127 13:39:36.584555 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c46efc3c37a8630916a31c8b37fd9d90445ab2396863fc33675c2d7e14da3cdd\": container with ID starting with c46efc3c37a8630916a31c8b37fd9d90445ab2396863fc33675c2d7e14da3cdd not found: ID does not exist" containerID="c46efc3c37a8630916a31c8b37fd9d90445ab2396863fc33675c2d7e14da3cdd" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.584593 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c46efc3c37a8630916a31c8b37fd9d90445ab2396863fc33675c2d7e14da3cdd"} err="failed to get container status \"c46efc3c37a8630916a31c8b37fd9d90445ab2396863fc33675c2d7e14da3cdd\": rpc error: code = NotFound desc = could not find container \"c46efc3c37a8630916a31c8b37fd9d90445ab2396863fc33675c2d7e14da3cdd\": container with ID starting with c46efc3c37a8630916a31c8b37fd9d90445ab2396863fc33675c2d7e14da3cdd not found: ID does not exist" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.587994 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7e0ce90-fda8-4074-b753-0df1531d7fcc-logs\") pod \"nova-kuttl-api-0\" (UID: \"c7e0ce90-fda8-4074-b753-0df1531d7fcc\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.588061 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ph7r\" (UniqueName: \"kubernetes.io/projected/c7e0ce90-fda8-4074-b753-0df1531d7fcc-kube-api-access-4ph7r\") pod \"nova-kuttl-api-0\" (UID: \"c7e0ce90-fda8-4074-b753-0df1531d7fcc\") " 
pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.588080 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d7007af-1e26-4b89-a761-5921086ff009-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"1d7007af-1e26-4b89-a761-5921086ff009\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.588103 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx22b\" (UniqueName: \"kubernetes.io/projected/1d7007af-1e26-4b89-a761-5921086ff009-kube-api-access-sx22b\") pod \"nova-kuttl-metadata-0\" (UID: \"1d7007af-1e26-4b89-a761-5921086ff009\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.588163 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d7007af-1e26-4b89-a761-5921086ff009-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"1d7007af-1e26-4b89-a761-5921086ff009\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.588187 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e0ce90-fda8-4074-b753-0df1531d7fcc-config-data\") pod \"nova-kuttl-api-0\" (UID: \"c7e0ce90-fda8-4074-b753-0df1531d7fcc\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.689855 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d7007af-1e26-4b89-a761-5921086ff009-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"1d7007af-1e26-4b89-a761-5921086ff009\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 27 13:39:36 
crc kubenswrapper[4786]: I0127 13:39:36.689905 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e0ce90-fda8-4074-b753-0df1531d7fcc-config-data\") pod \"nova-kuttl-api-0\" (UID: \"c7e0ce90-fda8-4074-b753-0df1531d7fcc\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.689959 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7e0ce90-fda8-4074-b753-0df1531d7fcc-logs\") pod \"nova-kuttl-api-0\" (UID: \"c7e0ce90-fda8-4074-b753-0df1531d7fcc\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.689988 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ph7r\" (UniqueName: \"kubernetes.io/projected/c7e0ce90-fda8-4074-b753-0df1531d7fcc-kube-api-access-4ph7r\") pod \"nova-kuttl-api-0\" (UID: \"c7e0ce90-fda8-4074-b753-0df1531d7fcc\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.690005 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d7007af-1e26-4b89-a761-5921086ff009-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"1d7007af-1e26-4b89-a761-5921086ff009\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.690029 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx22b\" (UniqueName: \"kubernetes.io/projected/1d7007af-1e26-4b89-a761-5921086ff009-kube-api-access-sx22b\") pod \"nova-kuttl-metadata-0\" (UID: \"1d7007af-1e26-4b89-a761-5921086ff009\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.690426 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d7007af-1e26-4b89-a761-5921086ff009-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"1d7007af-1e26-4b89-a761-5921086ff009\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.691082 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7e0ce90-fda8-4074-b753-0df1531d7fcc-logs\") pod \"nova-kuttl-api-0\" (UID: \"c7e0ce90-fda8-4074-b753-0df1531d7fcc\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.695240 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e0ce90-fda8-4074-b753-0df1531d7fcc-config-data\") pod \"nova-kuttl-api-0\" (UID: \"c7e0ce90-fda8-4074-b753-0df1531d7fcc\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.701515 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d7007af-1e26-4b89-a761-5921086ff009-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"1d7007af-1e26-4b89-a761-5921086ff009\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.712681 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx22b\" (UniqueName: \"kubernetes.io/projected/1d7007af-1e26-4b89-a761-5921086ff009-kube-api-access-sx22b\") pod \"nova-kuttl-metadata-0\" (UID: \"1d7007af-1e26-4b89-a761-5921086ff009\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.714317 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ph7r\" (UniqueName: \"kubernetes.io/projected/c7e0ce90-fda8-4074-b753-0df1531d7fcc-kube-api-access-4ph7r\") pod \"nova-kuttl-api-0\" (UID: \"c7e0ce90-fda8-4074-b753-0df1531d7fcc\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.780658 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.863945 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.874313 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.892567 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lq8hn\" (UniqueName: \"kubernetes.io/projected/c02c57c0-1b08-4ba4-9784-ed93d641fd86-kube-api-access-lq8hn\") pod \"c02c57c0-1b08-4ba4-9784-ed93d641fd86\" (UID: \"c02c57c0-1b08-4ba4-9784-ed93d641fd86\") "
Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.892678 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c02c57c0-1b08-4ba4-9784-ed93d641fd86-config-data\") pod \"c02c57c0-1b08-4ba4-9784-ed93d641fd86\" (UID: \"c02c57c0-1b08-4ba4-9784-ed93d641fd86\") "
Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.898868 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c02c57c0-1b08-4ba4-9784-ed93d641fd86-kube-api-access-lq8hn" (OuterVolumeSpecName: "kube-api-access-lq8hn") pod "c02c57c0-1b08-4ba4-9784-ed93d641fd86" (UID: "c02c57c0-1b08-4ba4-9784-ed93d641fd86"). InnerVolumeSpecName "kube-api-access-lq8hn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.913043 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c02c57c0-1b08-4ba4-9784-ed93d641fd86-config-data" (OuterVolumeSpecName: "config-data") pod "c02c57c0-1b08-4ba4-9784-ed93d641fd86" (UID: "c02c57c0-1b08-4ba4-9784-ed93d641fd86"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.994748 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lq8hn\" (UniqueName: \"kubernetes.io/projected/c02c57c0-1b08-4ba4-9784-ed93d641fd86-kube-api-access-lq8hn\") on node \"crc\" DevicePath \"\""
Jan 27 13:39:36 crc kubenswrapper[4786]: I0127 13:39:36.994779 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c02c57c0-1b08-4ba4-9784-ed93d641fd86-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 13:39:37 crc kubenswrapper[4786]: I0127 13:39:37.369526 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 27 13:39:37 crc kubenswrapper[4786]: I0127 13:39:37.376418 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 27 13:39:37 crc kubenswrapper[4786]: W0127 13:39:37.385585 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d7007af_1e26_4b89_a761_5921086ff009.slice/crio-c4f65ef69879b93170167cbe54143c03628e9d91eef7a896c80e64f87659f200 WatchSource:0}: Error finding container c4f65ef69879b93170167cbe54143c03628e9d91eef7a896c80e64f87659f200: Status 404 returned error can't find the container with id c4f65ef69879b93170167cbe54143c03628e9d91eef7a896c80e64f87659f200
Jan 27 13:39:37 crc kubenswrapper[4786]: W0127 13:39:37.389006 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7e0ce90_fda8_4074_b753_0df1531d7fcc.slice/crio-060236c2eee4fe1b111f13150719c73d784f45fb40ec01818de350ca3529b03a WatchSource:0}: Error finding container 060236c2eee4fe1b111f13150719c73d784f45fb40ec01818de350ca3529b03a: Status 404 returned error can't find the container with id 060236c2eee4fe1b111f13150719c73d784f45fb40ec01818de350ca3529b03a
Jan 27 13:39:37 crc kubenswrapper[4786]: I0127 13:39:37.425715 4786 generic.go:334] "Generic (PLEG): container finished" podID="c02c57c0-1b08-4ba4-9784-ed93d641fd86" containerID="30ca3652a5aa555228a080670d807333ae7fda6b3c6af8b053f62163b25d1e13" exitCode=0
Jan 27 13:39:37 crc kubenswrapper[4786]: I0127 13:39:37.425785 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 27 13:39:37 crc kubenswrapper[4786]: I0127 13:39:37.425772 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"c02c57c0-1b08-4ba4-9784-ed93d641fd86","Type":"ContainerDied","Data":"30ca3652a5aa555228a080670d807333ae7fda6b3c6af8b053f62163b25d1e13"}
Jan 27 13:39:37 crc kubenswrapper[4786]: I0127 13:39:37.425905 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"c02c57c0-1b08-4ba4-9784-ed93d641fd86","Type":"ContainerDied","Data":"fed1fd49d7d0fdb3c1caecee9af072336e3ac3808896d3bb49d48439a8866d0c"}
Jan 27 13:39:37 crc kubenswrapper[4786]: I0127 13:39:37.425932 4786 scope.go:117] "RemoveContainer" containerID="30ca3652a5aa555228a080670d807333ae7fda6b3c6af8b053f62163b25d1e13"
Jan 27 13:39:37 crc kubenswrapper[4786]: I0127 13:39:37.428857 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"c7e0ce90-fda8-4074-b753-0df1531d7fcc","Type":"ContainerStarted","Data":"060236c2eee4fe1b111f13150719c73d784f45fb40ec01818de350ca3529b03a"}
Jan 27 13:39:37 crc kubenswrapper[4786]: I0127 13:39:37.431191 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"1d7007af-1e26-4b89-a761-5921086ff009","Type":"ContainerStarted","Data":"c4f65ef69879b93170167cbe54143c03628e9d91eef7a896c80e64f87659f200"}
Jan 27 13:39:37 crc kubenswrapper[4786]: I0127 13:39:37.453939 4786 scope.go:117] "RemoveContainer" containerID="30ca3652a5aa555228a080670d807333ae7fda6b3c6af8b053f62163b25d1e13"
Jan 27 13:39:37 crc kubenswrapper[4786]: E0127 13:39:37.454395 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30ca3652a5aa555228a080670d807333ae7fda6b3c6af8b053f62163b25d1e13\": container with ID starting with 30ca3652a5aa555228a080670d807333ae7fda6b3c6af8b053f62163b25d1e13 not found: ID does not exist" containerID="30ca3652a5aa555228a080670d807333ae7fda6b3c6af8b053f62163b25d1e13"
Jan 27 13:39:37 crc kubenswrapper[4786]: I0127 13:39:37.454441 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30ca3652a5aa555228a080670d807333ae7fda6b3c6af8b053f62163b25d1e13"} err="failed to get container status \"30ca3652a5aa555228a080670d807333ae7fda6b3c6af8b053f62163b25d1e13\": rpc error: code = NotFound desc = could not find container \"30ca3652a5aa555228a080670d807333ae7fda6b3c6af8b053f62163b25d1e13\": container with ID starting with 30ca3652a5aa555228a080670d807333ae7fda6b3c6af8b053f62163b25d1e13 not found: ID does not exist"
Jan 27 13:39:37 crc kubenswrapper[4786]: I0127 13:39:37.493552 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd" path="/var/lib/kubelet/pods/1e1f79a4-6849-4b07-bb5d-5f5fb787d1fd/volumes"
Jan 27 13:39:37 crc kubenswrapper[4786]: I0127 13:39:37.494265 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4d08136-66cf-470b-8aa6-b3a59fa583b4" path="/var/lib/kubelet/pods/f4d08136-66cf-470b-8aa6-b3a59fa583b4/volumes"
Jan 27 13:39:37 crc kubenswrapper[4786]: I0127 13:39:37.495219 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 27 13:39:37 crc kubenswrapper[4786]: I0127 13:39:37.503988 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 27 13:39:37 crc kubenswrapper[4786]: I0127 13:39:37.516467 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 27 13:39:37 crc kubenswrapper[4786]: E0127 13:39:37.516928 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c02c57c0-1b08-4ba4-9784-ed93d641fd86" containerName="nova-kuttl-scheduler-scheduler"
Jan 27 13:39:37 crc kubenswrapper[4786]: I0127 13:39:37.516946 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="c02c57c0-1b08-4ba4-9784-ed93d641fd86" containerName="nova-kuttl-scheduler-scheduler"
Jan 27 13:39:37 crc kubenswrapper[4786]: I0127 13:39:37.517088 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="c02c57c0-1b08-4ba4-9784-ed93d641fd86" containerName="nova-kuttl-scheduler-scheduler"
Jan 27 13:39:37 crc kubenswrapper[4786]: I0127 13:39:37.517709 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 27 13:39:37 crc kubenswrapper[4786]: I0127 13:39:37.521096 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data"
Jan 27 13:39:37 crc kubenswrapper[4786]: E0127 13:39:37.523564 4786 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc02c57c0_1b08_4ba4_9784_ed93d641fd86.slice\": RecentStats: unable to find data in memory cache]"
Jan 27 13:39:37 crc kubenswrapper[4786]: I0127 13:39:37.528309 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 27 13:39:37 crc kubenswrapper[4786]: I0127 13:39:37.603306 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d2d365a-46c9-4f47-9501-654446cbd40d-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"4d2d365a-46c9-4f47-9501-654446cbd40d\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 27 13:39:37 crc kubenswrapper[4786]: I0127 13:39:37.603830 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnr86\" (UniqueName: \"kubernetes.io/projected/4d2d365a-46c9-4f47-9501-654446cbd40d-kube-api-access-vnr86\") pod \"nova-kuttl-scheduler-0\" (UID: \"4d2d365a-46c9-4f47-9501-654446cbd40d\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 27 13:39:37 crc kubenswrapper[4786]: I0127 13:39:37.705431 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnr86\" (UniqueName: \"kubernetes.io/projected/4d2d365a-46c9-4f47-9501-654446cbd40d-kube-api-access-vnr86\") pod \"nova-kuttl-scheduler-0\" (UID: \"4d2d365a-46c9-4f47-9501-654446cbd40d\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 27 13:39:37 crc kubenswrapper[4786]: I0127 13:39:37.705533 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d2d365a-46c9-4f47-9501-654446cbd40d-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"4d2d365a-46c9-4f47-9501-654446cbd40d\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 27 13:39:37 crc kubenswrapper[4786]: I0127 13:39:37.710216 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d2d365a-46c9-4f47-9501-654446cbd40d-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"4d2d365a-46c9-4f47-9501-654446cbd40d\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 27 13:39:37 crc kubenswrapper[4786]: I0127 13:39:37.728007 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnr86\" (UniqueName: \"kubernetes.io/projected/4d2d365a-46c9-4f47-9501-654446cbd40d-kube-api-access-vnr86\") pod \"nova-kuttl-scheduler-0\" (UID: \"4d2d365a-46c9-4f47-9501-654446cbd40d\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 27 13:39:37 crc kubenswrapper[4786]: I0127 13:39:37.838259 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 27 13:39:38 crc kubenswrapper[4786]: I0127 13:39:38.080566 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 27 13:39:38 crc kubenswrapper[4786]: W0127 13:39:38.085908 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d2d365a_46c9_4f47_9501_654446cbd40d.slice/crio-65d513bb53fca4b4aba867f16daf7fd6b89a95c158680e71d2505cd87cf11845 WatchSource:0}: Error finding container 65d513bb53fca4b4aba867f16daf7fd6b89a95c158680e71d2505cd87cf11845: Status 404 returned error can't find the container with id 65d513bb53fca4b4aba867f16daf7fd6b89a95c158680e71d2505cd87cf11845
Jan 27 13:39:38 crc kubenswrapper[4786]: I0127 13:39:38.443531 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"1d7007af-1e26-4b89-a761-5921086ff009","Type":"ContainerStarted","Data":"ea705dd144b0bb22dcaac757858627336d962c4b0955c81116966ed60ff3b2a0"}
Jan 27 13:39:38 crc kubenswrapper[4786]: I0127 13:39:38.443585 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"1d7007af-1e26-4b89-a761-5921086ff009","Type":"ContainerStarted","Data":"b15036510de82e3e39c304f29c8d3187a7c57040641e202012f016d2532e5a24"}
Jan 27 13:39:38 crc kubenswrapper[4786]: I0127 13:39:38.448966 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"4d2d365a-46c9-4f47-9501-654446cbd40d","Type":"ContainerStarted","Data":"93a416458e92f82f9d4663efe1209f107ef2fcb83519345936b18693e019acde"}
Jan 27 13:39:38 crc kubenswrapper[4786]: I0127 13:39:38.448998 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"4d2d365a-46c9-4f47-9501-654446cbd40d","Type":"ContainerStarted","Data":"65d513bb53fca4b4aba867f16daf7fd6b89a95c158680e71d2505cd87cf11845"}
Jan 27 13:39:38 crc kubenswrapper[4786]: I0127 13:39:38.452428 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"c7e0ce90-fda8-4074-b753-0df1531d7fcc","Type":"ContainerStarted","Data":"78e3262e35ac0470eab2e98e1a2a57b53d8ad93eb898ef592c72ef9845fe48b0"}
Jan 27 13:39:38 crc kubenswrapper[4786]: I0127 13:39:38.452461 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"c7e0ce90-fda8-4074-b753-0df1531d7fcc","Type":"ContainerStarted","Data":"f8274e635bafed335b58cfba512a01ca6dfed64c2db5638c9b7d5edfb41e233e"}
Jan 27 13:39:38 crc kubenswrapper[4786]: I0127 13:39:38.474007 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.473983471 podStartE2EDuration="2.473983471s" podCreationTimestamp="2026-01-27 13:39:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:39:38.459578467 +0000 UTC m=+1961.670192586" watchObservedRunningTime="2026-01-27 13:39:38.473983471 +0000 UTC m=+1961.684597590"
Jan 27 13:39:38 crc kubenswrapper[4786]: I0127 13:39:38.490155 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=1.490134334 podStartE2EDuration="1.490134334s" podCreationTimestamp="2026-01-27 13:39:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:39:38.483951205 +0000 UTC m=+1961.694565324" watchObservedRunningTime="2026-01-27 13:39:38.490134334 +0000 UTC m=+1961.700748453"
Jan 27 13:39:38 crc kubenswrapper[4786]: I0127 13:39:38.501922 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.501904506 podStartE2EDuration="2.501904506s" podCreationTimestamp="2026-01-27 13:39:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:39:38.498898463 +0000 UTC m=+1961.709512582" watchObservedRunningTime="2026-01-27 13:39:38.501904506 +0000 UTC m=+1961.712518625"
Jan 27 13:39:39 crc kubenswrapper[4786]: I0127 13:39:39.479802 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c02c57c0-1b08-4ba4-9784-ed93d641fd86" path="/var/lib/kubelet/pods/c02c57c0-1b08-4ba4-9784-ed93d641fd86/volumes"
Jan 27 13:39:41 crc kubenswrapper[4786]: I0127 13:39:41.865674 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 27 13:39:41 crc kubenswrapper[4786]: I0127 13:39:41.866074 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 27 13:39:42 crc kubenswrapper[4786]: I0127 13:39:42.839092 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 27 13:39:46 crc kubenswrapper[4786]: I0127 13:39:46.865130 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 27 13:39:46 crc kubenswrapper[4786]: I0127 13:39:46.865459 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 27 13:39:46 crc kubenswrapper[4786]: I0127 13:39:46.874846 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 27 13:39:46 crc kubenswrapper[4786]: I0127 13:39:46.874900 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 27 13:39:47 crc kubenswrapper[4786]: I0127 13:39:47.839630 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 27 13:39:47 crc kubenswrapper[4786]: I0127 13:39:47.865579 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 27 13:39:48 crc kubenswrapper[4786]: I0127 13:39:48.030783 4786 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="1d7007af-1e26-4b89-a761-5921086ff009" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.230:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 13:39:48 crc kubenswrapper[4786]: I0127 13:39:48.030839 4786 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="1d7007af-1e26-4b89-a761-5921086ff009" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.230:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 13:39:48 crc kubenswrapper[4786]: I0127 13:39:48.030850 4786 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="c7e0ce90-fda8-4074-b753-0df1531d7fcc" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.231:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 13:39:48 crc kubenswrapper[4786]: I0127 13:39:48.030916 4786 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="c7e0ce90-fda8-4074-b753-0df1531d7fcc" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.231:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 13:39:48 crc kubenswrapper[4786]: I0127 13:39:48.574267 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 27 13:39:56 crc kubenswrapper[4786]: I0127 13:39:56.867144 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 27 13:39:56 crc kubenswrapper[4786]: I0127 13:39:56.867934 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 27 13:39:56 crc kubenswrapper[4786]: I0127 13:39:56.869745 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 27 13:39:56 crc kubenswrapper[4786]: I0127 13:39:56.870787 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 27 13:39:56 crc kubenswrapper[4786]: I0127 13:39:56.877919 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 27 13:39:56 crc kubenswrapper[4786]: I0127 13:39:56.878252 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 27 13:39:56 crc kubenswrapper[4786]: I0127 13:39:56.878885 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 27 13:39:56 crc kubenswrapper[4786]: I0127 13:39:56.884761 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 27 13:39:57 crc kubenswrapper[4786]: I0127 13:39:57.621426 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 27 13:39:57 crc kubenswrapper[4786]: I0127 13:39:57.624707 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 27 13:39:59 crc kubenswrapper[4786]: I0127 13:39:59.775516 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f"]
Jan 27 13:39:59 crc kubenswrapper[4786]: I0127 13:39:59.777172 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f"
Jan 27 13:39:59 crc kubenswrapper[4786]: I0127 13:39:59.778866 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-scripts"
Jan 27 13:39:59 crc kubenswrapper[4786]: I0127 13:39:59.779457 4786 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-config-data"
Jan 27 13:39:59 crc kubenswrapper[4786]: I0127 13:39:59.789232 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f"]
Jan 27 13:39:59 crc kubenswrapper[4786]: I0127 13:39:59.892947 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnknw\" (UniqueName: \"kubernetes.io/projected/89580574-f031-4b88-98e5-18075ffa20c6-kube-api-access-gnknw\") pod \"nova-kuttl-cell1-cell-delete-bmr6f\" (UID: \"89580574-f031-4b88-98e5-18075ffa20c6\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f"
Jan 27 13:39:59 crc kubenswrapper[4786]: I0127 13:39:59.893080 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89580574-f031-4b88-98e5-18075ffa20c6-scripts\") pod \"nova-kuttl-cell1-cell-delete-bmr6f\" (UID: \"89580574-f031-4b88-98e5-18075ffa20c6\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f"
Jan 27 13:39:59 crc kubenswrapper[4786]: I0127 13:39:59.893105 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89580574-f031-4b88-98e5-18075ffa20c6-config-data\") pod \"nova-kuttl-cell1-cell-delete-bmr6f\" (UID: \"89580574-f031-4b88-98e5-18075ffa20c6\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f"
Jan 27 13:39:59 crc kubenswrapper[4786]: I0127 13:39:59.995128 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89580574-f031-4b88-98e5-18075ffa20c6-scripts\") pod \"nova-kuttl-cell1-cell-delete-bmr6f\" (UID: \"89580574-f031-4b88-98e5-18075ffa20c6\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f"
Jan 27 13:39:59 crc kubenswrapper[4786]: I0127 13:39:59.995177 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89580574-f031-4b88-98e5-18075ffa20c6-config-data\") pod \"nova-kuttl-cell1-cell-delete-bmr6f\" (UID: \"89580574-f031-4b88-98e5-18075ffa20c6\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f"
Jan 27 13:39:59 crc kubenswrapper[4786]: I0127 13:39:59.995260 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnknw\" (UniqueName: \"kubernetes.io/projected/89580574-f031-4b88-98e5-18075ffa20c6-kube-api-access-gnknw\") pod \"nova-kuttl-cell1-cell-delete-bmr6f\" (UID: \"89580574-f031-4b88-98e5-18075ffa20c6\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f"
Jan 27 13:40:00 crc kubenswrapper[4786]: I0127 13:40:00.006823 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89580574-f031-4b88-98e5-18075ffa20c6-config-data\") pod \"nova-kuttl-cell1-cell-delete-bmr6f\" (UID: \"89580574-f031-4b88-98e5-18075ffa20c6\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f"
Jan 27 13:40:00 crc kubenswrapper[4786]: I0127 13:40:00.007121 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89580574-f031-4b88-98e5-18075ffa20c6-scripts\") pod \"nova-kuttl-cell1-cell-delete-bmr6f\" (UID: \"89580574-f031-4b88-98e5-18075ffa20c6\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f"
Jan 27 13:40:00 crc kubenswrapper[4786]: I0127 13:40:00.015433 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnknw\" (UniqueName: \"kubernetes.io/projected/89580574-f031-4b88-98e5-18075ffa20c6-kube-api-access-gnknw\") pod \"nova-kuttl-cell1-cell-delete-bmr6f\" (UID: \"89580574-f031-4b88-98e5-18075ffa20c6\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f"
Jan 27 13:40:00 crc kubenswrapper[4786]: I0127 13:40:00.106945 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f"
Jan 27 13:40:00 crc kubenswrapper[4786]: I0127 13:40:00.530152 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f"]
Jan 27 13:40:00 crc kubenswrapper[4786]: I0127 13:40:00.646835 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" event={"ID":"89580574-f031-4b88-98e5-18075ffa20c6","Type":"ContainerStarted","Data":"aaf8cfcdc930147123fad8fa7a1c95cd628f321f880a66113e73b0bbd6f66f1c"}
Jan 27 13:40:01 crc kubenswrapper[4786]: I0127 13:40:01.655471 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" event={"ID":"89580574-f031-4b88-98e5-18075ffa20c6","Type":"ContainerStarted","Data":"90001aed6a38544b48d347ef02d8f1f78e0a9934313b13d3bc4b43ae58eb5396"}
Jan 27 13:40:01 crc kubenswrapper[4786]: I0127 13:40:01.673472 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" podStartSLOduration=2.67345847 podStartE2EDuration="2.67345847s" podCreationTimestamp="2026-01-27 13:39:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:40:01.669668576 +0000 UTC m=+1984.880282695" watchObservedRunningTime="2026-01-27 13:40:01.67345847 +0000 UTC m=+1984.884072589"
Jan 27 13:40:05 crc kubenswrapper[4786]: I0127 13:40:05.699074 4786 generic.go:334] "Generic (PLEG): container finished" podID="89580574-f031-4b88-98e5-18075ffa20c6" containerID="90001aed6a38544b48d347ef02d8f1f78e0a9934313b13d3bc4b43ae58eb5396" exitCode=2
Jan 27 13:40:05 crc kubenswrapper[4786]: I0127 13:40:05.699128 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" event={"ID":"89580574-f031-4b88-98e5-18075ffa20c6","Type":"ContainerDied","Data":"90001aed6a38544b48d347ef02d8f1f78e0a9934313b13d3bc4b43ae58eb5396"}
Jan 27 13:40:05 crc kubenswrapper[4786]: I0127 13:40:05.701080 4786 scope.go:117] "RemoveContainer" containerID="90001aed6a38544b48d347ef02d8f1f78e0a9934313b13d3bc4b43ae58eb5396"
Jan 27 13:40:06 crc kubenswrapper[4786]: I0127 13:40:06.710994 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" event={"ID":"89580574-f031-4b88-98e5-18075ffa20c6","Type":"ContainerStarted","Data":"5fe628af942e4ba5e59dab1f92c06f0d278290b9d90dadfa8819c4d82d5f161d"}
Jan 27 13:40:10 crc kubenswrapper[4786]: I0127 13:40:10.751098 4786 generic.go:334] "Generic (PLEG): container finished" podID="89580574-f031-4b88-98e5-18075ffa20c6" containerID="5fe628af942e4ba5e59dab1f92c06f0d278290b9d90dadfa8819c4d82d5f161d" exitCode=2
Jan 27 13:40:10 crc kubenswrapper[4786]: I0127 13:40:10.751184 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" event={"ID":"89580574-f031-4b88-98e5-18075ffa20c6","Type":"ContainerDied","Data":"5fe628af942e4ba5e59dab1f92c06f0d278290b9d90dadfa8819c4d82d5f161d"}
Jan 27 13:40:10 crc kubenswrapper[4786]: I0127 13:40:10.751720 4786 scope.go:117] "RemoveContainer" containerID="90001aed6a38544b48d347ef02d8f1f78e0a9934313b13d3bc4b43ae58eb5396"
Jan 27 13:40:10 crc kubenswrapper[4786]: I0127 13:40:10.752264 4786 scope.go:117] "RemoveContainer" containerID="5fe628af942e4ba5e59dab1f92c06f0d278290b9d90dadfa8819c4d82d5f161d"
Jan 27 13:40:10 crc kubenswrapper[4786]: E0127 13:40:10.752472 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 10s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-bmr6f_nova-kuttl-default(89580574-f031-4b88-98e5-18075ffa20c6)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" podUID="89580574-f031-4b88-98e5-18075ffa20c6"
Jan 27 13:40:15 crc kubenswrapper[4786]: I0127 13:40:15.342825 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cm5h7"]
Jan 27 13:40:15 crc kubenswrapper[4786]: I0127 13:40:15.347084 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cm5h7"
Jan 27 13:40:15 crc kubenswrapper[4786]: I0127 13:40:15.353340 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cm5h7"]
Jan 27 13:40:15 crc kubenswrapper[4786]: I0127 13:40:15.450029 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsgrz\" (UniqueName: \"kubernetes.io/projected/563764d9-352e-45a8-9f42-895636610cf4-kube-api-access-fsgrz\") pod \"redhat-operators-cm5h7\" (UID: \"563764d9-352e-45a8-9f42-895636610cf4\") " pod="openshift-marketplace/redhat-operators-cm5h7"
Jan 27 13:40:15 crc kubenswrapper[4786]: I0127 13:40:15.450181 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/563764d9-352e-45a8-9f42-895636610cf4-utilities\") pod \"redhat-operators-cm5h7\" (UID: \"563764d9-352e-45a8-9f42-895636610cf4\") " pod="openshift-marketplace/redhat-operators-cm5h7"
Jan 27 13:40:15 crc kubenswrapper[4786]: I0127 13:40:15.450247 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/563764d9-352e-45a8-9f42-895636610cf4-catalog-content\") pod \"redhat-operators-cm5h7\" (UID: \"563764d9-352e-45a8-9f42-895636610cf4\") " pod="openshift-marketplace/redhat-operators-cm5h7"
Jan 27 13:40:15 crc kubenswrapper[4786]: I0127 13:40:15.551411 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsgrz\" (UniqueName: \"kubernetes.io/projected/563764d9-352e-45a8-9f42-895636610cf4-kube-api-access-fsgrz\") pod \"redhat-operators-cm5h7\" (UID: \"563764d9-352e-45a8-9f42-895636610cf4\") " pod="openshift-marketplace/redhat-operators-cm5h7"
Jan 27 13:40:15 crc kubenswrapper[4786]: I0127 13:40:15.551539 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/563764d9-352e-45a8-9f42-895636610cf4-utilities\") pod \"redhat-operators-cm5h7\" (UID: \"563764d9-352e-45a8-9f42-895636610cf4\") " pod="openshift-marketplace/redhat-operators-cm5h7"
Jan 27 13:40:15 crc kubenswrapper[4786]: I0127 13:40:15.551594 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/563764d9-352e-45a8-9f42-895636610cf4-catalog-content\") pod \"redhat-operators-cm5h7\" (UID: \"563764d9-352e-45a8-9f42-895636610cf4\") " pod="openshift-marketplace/redhat-operators-cm5h7"
Jan 27 13:40:15 crc kubenswrapper[4786]: I0127 13:40:15.552101 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/563764d9-352e-45a8-9f42-895636610cf4-catalog-content\") pod \"redhat-operators-cm5h7\" (UID: \"563764d9-352e-45a8-9f42-895636610cf4\") " pod="openshift-marketplace/redhat-operators-cm5h7"
Jan 27 13:40:15 crc kubenswrapper[4786]: I0127 13:40:15.552743 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/563764d9-352e-45a8-9f42-895636610cf4-utilities\") pod \"redhat-operators-cm5h7\" (UID: \"563764d9-352e-45a8-9f42-895636610cf4\") " pod="openshift-marketplace/redhat-operators-cm5h7"
Jan 27 13:40:15 crc kubenswrapper[4786]: I0127 13:40:15.574547 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsgrz\" (UniqueName: \"kubernetes.io/projected/563764d9-352e-45a8-9f42-895636610cf4-kube-api-access-fsgrz\") pod \"redhat-operators-cm5h7\" (UID: \"563764d9-352e-45a8-9f42-895636610cf4\") " pod="openshift-marketplace/redhat-operators-cm5h7"
Jan 27 13:40:15 crc kubenswrapper[4786]: I0127 13:40:15.664375 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cm5h7"
Jan 27 13:40:16 crc kubenswrapper[4786]: I0127 13:40:16.121552 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cm5h7"]
Jan 27 13:40:16 crc kubenswrapper[4786]: W0127 13:40:16.122326 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod563764d9_352e_45a8_9f42_895636610cf4.slice/crio-01f8c6bf0492a42397b54198ba899082aa92fb4723f4976e3b1a2c16955c834f WatchSource:0}: Error finding container 01f8c6bf0492a42397b54198ba899082aa92fb4723f4976e3b1a2c16955c834f: Status 404 returned error can't find the container with id 01f8c6bf0492a42397b54198ba899082aa92fb4723f4976e3b1a2c16955c834f
Jan 27 13:40:16 crc kubenswrapper[4786]: I0127 13:40:16.809045 4786 generic.go:334] "Generic (PLEG): container finished" podID="563764d9-352e-45a8-9f42-895636610cf4" containerID="bd185bef350aad62fd55cec1c1deb6a29bd16d0631fd99303592667ae5d9453d" exitCode=0
Jan 27 13:40:16 crc kubenswrapper[4786]: I0127 13:40:16.809336 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cm5h7" event={"ID":"563764d9-352e-45a8-9f42-895636610cf4","Type":"ContainerDied","Data":"bd185bef350aad62fd55cec1c1deb6a29bd16d0631fd99303592667ae5d9453d"}
Jan 27 13:40:16 crc kubenswrapper[4786]: I0127 13:40:16.809363 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cm5h7" event={"ID":"563764d9-352e-45a8-9f42-895636610cf4","Type":"ContainerStarted","Data":"01f8c6bf0492a42397b54198ba899082aa92fb4723f4976e3b1a2c16955c834f"}
Jan 27 13:40:18 crc kubenswrapper[4786]: I0127 13:40:18.827472 4786 generic.go:334] "Generic (PLEG): container finished" podID="563764d9-352e-45a8-9f42-895636610cf4" containerID="7b237ab0ab64d6f8798b72f15e23855c1981316724b95a6d071d1277d34ded7c" exitCode=0
Jan 27 13:40:18 crc kubenswrapper[4786]: I0127 13:40:18.827583 
4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cm5h7" event={"ID":"563764d9-352e-45a8-9f42-895636610cf4","Type":"ContainerDied","Data":"7b237ab0ab64d6f8798b72f15e23855c1981316724b95a6d071d1277d34ded7c"} Jan 27 13:40:19 crc kubenswrapper[4786]: I0127 13:40:19.843392 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cm5h7" event={"ID":"563764d9-352e-45a8-9f42-895636610cf4","Type":"ContainerStarted","Data":"435f7363b3b613fd9a18c3e64ca3512a070d95a3f52d8c1d6ad17a45901efd9e"} Jan 27 13:40:19 crc kubenswrapper[4786]: I0127 13:40:19.870401 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cm5h7" podStartSLOduration=2.398072964 podStartE2EDuration="4.870378375s" podCreationTimestamp="2026-01-27 13:40:15 +0000 UTC" firstStartedPulling="2026-01-27 13:40:16.811012011 +0000 UTC m=+2000.021626130" lastFinishedPulling="2026-01-27 13:40:19.283317412 +0000 UTC m=+2002.493931541" observedRunningTime="2026-01-27 13:40:19.860743171 +0000 UTC m=+2003.071357310" watchObservedRunningTime="2026-01-27 13:40:19.870378375 +0000 UTC m=+2003.080992494" Jan 27 13:40:24 crc kubenswrapper[4786]: I0127 13:40:24.465443 4786 scope.go:117] "RemoveContainer" containerID="5fe628af942e4ba5e59dab1f92c06f0d278290b9d90dadfa8819c4d82d5f161d" Jan 27 13:40:25 crc kubenswrapper[4786]: I0127 13:40:25.665429 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cm5h7" Jan 27 13:40:25 crc kubenswrapper[4786]: I0127 13:40:25.668181 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cm5h7" Jan 27 13:40:25 crc kubenswrapper[4786]: I0127 13:40:25.710947 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cm5h7" Jan 27 13:40:25 crc kubenswrapper[4786]: I0127 
13:40:25.902475 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" event={"ID":"89580574-f031-4b88-98e5-18075ffa20c6","Type":"ContainerStarted","Data":"b3fd09fae5fd1f7b42a52b1a40e40497c84814d459766567b551babe818cd6a9"} Jan 27 13:40:25 crc kubenswrapper[4786]: I0127 13:40:25.946509 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cm5h7" Jan 27 13:40:25 crc kubenswrapper[4786]: I0127 13:40:25.987744 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cm5h7"] Jan 27 13:40:27 crc kubenswrapper[4786]: I0127 13:40:27.918747 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cm5h7" podUID="563764d9-352e-45a8-9f42-895636610cf4" containerName="registry-server" containerID="cri-o://435f7363b3b613fd9a18c3e64ca3512a070d95a3f52d8c1d6ad17a45901efd9e" gracePeriod=2 Jan 27 13:40:28 crc kubenswrapper[4786]: I0127 13:40:28.332194 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cm5h7" Jan 27 13:40:28 crc kubenswrapper[4786]: I0127 13:40:28.338458 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsgrz\" (UniqueName: \"kubernetes.io/projected/563764d9-352e-45a8-9f42-895636610cf4-kube-api-access-fsgrz\") pod \"563764d9-352e-45a8-9f42-895636610cf4\" (UID: \"563764d9-352e-45a8-9f42-895636610cf4\") " Jan 27 13:40:28 crc kubenswrapper[4786]: I0127 13:40:28.339003 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/563764d9-352e-45a8-9f42-895636610cf4-utilities\") pod \"563764d9-352e-45a8-9f42-895636610cf4\" (UID: \"563764d9-352e-45a8-9f42-895636610cf4\") " Jan 27 13:40:28 crc kubenswrapper[4786]: I0127 13:40:28.339082 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/563764d9-352e-45a8-9f42-895636610cf4-catalog-content\") pod \"563764d9-352e-45a8-9f42-895636610cf4\" (UID: \"563764d9-352e-45a8-9f42-895636610cf4\") " Jan 27 13:40:28 crc kubenswrapper[4786]: I0127 13:40:28.339921 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/563764d9-352e-45a8-9f42-895636610cf4-utilities" (OuterVolumeSpecName: "utilities") pod "563764d9-352e-45a8-9f42-895636610cf4" (UID: "563764d9-352e-45a8-9f42-895636610cf4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:40:28 crc kubenswrapper[4786]: I0127 13:40:28.347085 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/563764d9-352e-45a8-9f42-895636610cf4-kube-api-access-fsgrz" (OuterVolumeSpecName: "kube-api-access-fsgrz") pod "563764d9-352e-45a8-9f42-895636610cf4" (UID: "563764d9-352e-45a8-9f42-895636610cf4"). InnerVolumeSpecName "kube-api-access-fsgrz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:40:28 crc kubenswrapper[4786]: I0127 13:40:28.440181 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsgrz\" (UniqueName: \"kubernetes.io/projected/563764d9-352e-45a8-9f42-895636610cf4-kube-api-access-fsgrz\") on node \"crc\" DevicePath \"\"" Jan 27 13:40:28 crc kubenswrapper[4786]: I0127 13:40:28.440205 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/563764d9-352e-45a8-9f42-895636610cf4-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 13:40:28 crc kubenswrapper[4786]: I0127 13:40:28.495047 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/563764d9-352e-45a8-9f42-895636610cf4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "563764d9-352e-45a8-9f42-895636610cf4" (UID: "563764d9-352e-45a8-9f42-895636610cf4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:40:28 crc kubenswrapper[4786]: I0127 13:40:28.540848 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/563764d9-352e-45a8-9f42-895636610cf4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 13:40:28 crc kubenswrapper[4786]: I0127 13:40:28.930184 4786 generic.go:334] "Generic (PLEG): container finished" podID="563764d9-352e-45a8-9f42-895636610cf4" containerID="435f7363b3b613fd9a18c3e64ca3512a070d95a3f52d8c1d6ad17a45901efd9e" exitCode=0 Jan 27 13:40:28 crc kubenswrapper[4786]: I0127 13:40:28.930559 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cm5h7" event={"ID":"563764d9-352e-45a8-9f42-895636610cf4","Type":"ContainerDied","Data":"435f7363b3b613fd9a18c3e64ca3512a070d95a3f52d8c1d6ad17a45901efd9e"} Jan 27 13:40:28 crc kubenswrapper[4786]: I0127 13:40:28.930642 4786 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-cm5h7" event={"ID":"563764d9-352e-45a8-9f42-895636610cf4","Type":"ContainerDied","Data":"01f8c6bf0492a42397b54198ba899082aa92fb4723f4976e3b1a2c16955c834f"} Jan 27 13:40:28 crc kubenswrapper[4786]: I0127 13:40:28.930666 4786 scope.go:117] "RemoveContainer" containerID="435f7363b3b613fd9a18c3e64ca3512a070d95a3f52d8c1d6ad17a45901efd9e" Jan 27 13:40:28 crc kubenswrapper[4786]: I0127 13:40:28.930888 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cm5h7" Jan 27 13:40:28 crc kubenswrapper[4786]: I0127 13:40:28.950376 4786 scope.go:117] "RemoveContainer" containerID="7b237ab0ab64d6f8798b72f15e23855c1981316724b95a6d071d1277d34ded7c" Jan 27 13:40:28 crc kubenswrapper[4786]: I0127 13:40:28.975277 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cm5h7"] Jan 27 13:40:28 crc kubenswrapper[4786]: I0127 13:40:28.982030 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cm5h7"] Jan 27 13:40:28 crc kubenswrapper[4786]: I0127 13:40:28.987640 4786 scope.go:117] "RemoveContainer" containerID="bd185bef350aad62fd55cec1c1deb6a29bd16d0631fd99303592667ae5d9453d" Jan 27 13:40:29 crc kubenswrapper[4786]: I0127 13:40:29.015190 4786 scope.go:117] "RemoveContainer" containerID="435f7363b3b613fd9a18c3e64ca3512a070d95a3f52d8c1d6ad17a45901efd9e" Jan 27 13:40:29 crc kubenswrapper[4786]: E0127 13:40:29.015708 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"435f7363b3b613fd9a18c3e64ca3512a070d95a3f52d8c1d6ad17a45901efd9e\": container with ID starting with 435f7363b3b613fd9a18c3e64ca3512a070d95a3f52d8c1d6ad17a45901efd9e not found: ID does not exist" containerID="435f7363b3b613fd9a18c3e64ca3512a070d95a3f52d8c1d6ad17a45901efd9e" Jan 27 13:40:29 crc kubenswrapper[4786]: I0127 13:40:29.015738 4786 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"435f7363b3b613fd9a18c3e64ca3512a070d95a3f52d8c1d6ad17a45901efd9e"} err="failed to get container status \"435f7363b3b613fd9a18c3e64ca3512a070d95a3f52d8c1d6ad17a45901efd9e\": rpc error: code = NotFound desc = could not find container \"435f7363b3b613fd9a18c3e64ca3512a070d95a3f52d8c1d6ad17a45901efd9e\": container with ID starting with 435f7363b3b613fd9a18c3e64ca3512a070d95a3f52d8c1d6ad17a45901efd9e not found: ID does not exist" Jan 27 13:40:29 crc kubenswrapper[4786]: I0127 13:40:29.015758 4786 scope.go:117] "RemoveContainer" containerID="7b237ab0ab64d6f8798b72f15e23855c1981316724b95a6d071d1277d34ded7c" Jan 27 13:40:29 crc kubenswrapper[4786]: E0127 13:40:29.016078 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b237ab0ab64d6f8798b72f15e23855c1981316724b95a6d071d1277d34ded7c\": container with ID starting with 7b237ab0ab64d6f8798b72f15e23855c1981316724b95a6d071d1277d34ded7c not found: ID does not exist" containerID="7b237ab0ab64d6f8798b72f15e23855c1981316724b95a6d071d1277d34ded7c" Jan 27 13:40:29 crc kubenswrapper[4786]: I0127 13:40:29.016100 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b237ab0ab64d6f8798b72f15e23855c1981316724b95a6d071d1277d34ded7c"} err="failed to get container status \"7b237ab0ab64d6f8798b72f15e23855c1981316724b95a6d071d1277d34ded7c\": rpc error: code = NotFound desc = could not find container \"7b237ab0ab64d6f8798b72f15e23855c1981316724b95a6d071d1277d34ded7c\": container with ID starting with 7b237ab0ab64d6f8798b72f15e23855c1981316724b95a6d071d1277d34ded7c not found: ID does not exist" Jan 27 13:40:29 crc kubenswrapper[4786]: I0127 13:40:29.016113 4786 scope.go:117] "RemoveContainer" containerID="bd185bef350aad62fd55cec1c1deb6a29bd16d0631fd99303592667ae5d9453d" Jan 27 13:40:29 crc kubenswrapper[4786]: E0127 
13:40:29.016349 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd185bef350aad62fd55cec1c1deb6a29bd16d0631fd99303592667ae5d9453d\": container with ID starting with bd185bef350aad62fd55cec1c1deb6a29bd16d0631fd99303592667ae5d9453d not found: ID does not exist" containerID="bd185bef350aad62fd55cec1c1deb6a29bd16d0631fd99303592667ae5d9453d" Jan 27 13:40:29 crc kubenswrapper[4786]: I0127 13:40:29.016373 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd185bef350aad62fd55cec1c1deb6a29bd16d0631fd99303592667ae5d9453d"} err="failed to get container status \"bd185bef350aad62fd55cec1c1deb6a29bd16d0631fd99303592667ae5d9453d\": rpc error: code = NotFound desc = could not find container \"bd185bef350aad62fd55cec1c1deb6a29bd16d0631fd99303592667ae5d9453d\": container with ID starting with bd185bef350aad62fd55cec1c1deb6a29bd16d0631fd99303592667ae5d9453d not found: ID does not exist" Jan 27 13:40:29 crc kubenswrapper[4786]: I0127 13:40:29.476945 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="563764d9-352e-45a8-9f42-895636610cf4" path="/var/lib/kubelet/pods/563764d9-352e-45a8-9f42-895636610cf4/volumes" Jan 27 13:40:30 crc kubenswrapper[4786]: I0127 13:40:30.950146 4786 generic.go:334] "Generic (PLEG): container finished" podID="89580574-f031-4b88-98e5-18075ffa20c6" containerID="b3fd09fae5fd1f7b42a52b1a40e40497c84814d459766567b551babe818cd6a9" exitCode=2 Jan 27 13:40:30 crc kubenswrapper[4786]: I0127 13:40:30.950182 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" event={"ID":"89580574-f031-4b88-98e5-18075ffa20c6","Type":"ContainerDied","Data":"b3fd09fae5fd1f7b42a52b1a40e40497c84814d459766567b551babe818cd6a9"} Jan 27 13:40:30 crc kubenswrapper[4786]: I0127 13:40:30.950556 4786 scope.go:117] "RemoveContainer" 
containerID="5fe628af942e4ba5e59dab1f92c06f0d278290b9d90dadfa8819c4d82d5f161d" Jan 27 13:40:30 crc kubenswrapper[4786]: I0127 13:40:30.951406 4786 scope.go:117] "RemoveContainer" containerID="b3fd09fae5fd1f7b42a52b1a40e40497c84814d459766567b551babe818cd6a9" Jan 27 13:40:30 crc kubenswrapper[4786]: E0127 13:40:30.951672 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-bmr6f_nova-kuttl-default(89580574-f031-4b88-98e5-18075ffa20c6)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" podUID="89580574-f031-4b88-98e5-18075ffa20c6" Jan 27 13:40:46 crc kubenswrapper[4786]: I0127 13:40:46.465479 4786 scope.go:117] "RemoveContainer" containerID="b3fd09fae5fd1f7b42a52b1a40e40497c84814d459766567b551babe818cd6a9" Jan 27 13:40:46 crc kubenswrapper[4786]: E0127 13:40:46.466297 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-bmr6f_nova-kuttl-default(89580574-f031-4b88-98e5-18075ffa20c6)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" podUID="89580574-f031-4b88-98e5-18075ffa20c6" Jan 27 13:40:59 crc kubenswrapper[4786]: I0127 13:40:59.997452 4786 scope.go:117] "RemoveContainer" containerID="726ebba3e24e1b1f548bcd5d15f0128d37425cb3a8dbec8915fa0f08efa951a8" Jan 27 13:41:00 crc kubenswrapper[4786]: I0127 13:41:00.032009 4786 scope.go:117] "RemoveContainer" containerID="73922993335a58cfa0e71a3a13cf37d6d984409db010bb4141672ea3810b86c9" Jan 27 13:41:00 crc kubenswrapper[4786]: I0127 13:41:00.069494 4786 scope.go:117] "RemoveContainer" containerID="54f5da1979232a43b00995472025f54884941588c466902d8d5d23dd25d96a28" Jan 27 13:41:00 crc kubenswrapper[4786]: I0127 13:41:00.085735 4786 scope.go:117] "RemoveContainer" 
containerID="5271c694b2af73ba565f0638988720da96c7ed728357af14050c55179f143e5a" Jan 27 13:41:00 crc kubenswrapper[4786]: I0127 13:41:00.138334 4786 scope.go:117] "RemoveContainer" containerID="bdd86bc29ccd6afdb3efaac74980f39bf86e6f010822c5ac4fb1df9dc2e6102a" Jan 27 13:41:00 crc kubenswrapper[4786]: I0127 13:41:00.155410 4786 scope.go:117] "RemoveContainer" containerID="1f73b1e2c2c8dc08b8a0e64743aa5b7bb37d369a106b64ac7dba2966f385f0ba" Jan 27 13:41:00 crc kubenswrapper[4786]: I0127 13:41:00.188697 4786 scope.go:117] "RemoveContainer" containerID="63355997a7f7dbb77d495c3c9d68ec163a663716f067441326a30fea0b9b1c5b" Jan 27 13:41:00 crc kubenswrapper[4786]: I0127 13:41:00.205996 4786 scope.go:117] "RemoveContainer" containerID="ead088a3d806480e2ba8b3a9e8104ee8564a35df411737e783d68946d6a63597" Jan 27 13:41:00 crc kubenswrapper[4786]: I0127 13:41:00.243704 4786 scope.go:117] "RemoveContainer" containerID="549a40290a02d4347f283c02d433598be3a292f571f46e337e9d04a8a383df07" Jan 27 13:41:00 crc kubenswrapper[4786]: I0127 13:41:00.258737 4786 scope.go:117] "RemoveContainer" containerID="d4bfc1c5e8bba5e7a80de5dbc3a87cc974cc68affe27f4446e153d432c328d18" Jan 27 13:41:00 crc kubenswrapper[4786]: I0127 13:41:00.274165 4786 scope.go:117] "RemoveContainer" containerID="ecef8bf18aadd498025e5a36e1ad6435e794efe39cbbdc30073a2d53d0862373" Jan 27 13:41:00 crc kubenswrapper[4786]: I0127 13:41:00.290598 4786 scope.go:117] "RemoveContainer" containerID="86528360438e8106ed778f3c5df213aeebea35ac07fe4c6f3a819e8e5c345ebf" Jan 27 13:41:00 crc kubenswrapper[4786]: I0127 13:41:00.324312 4786 scope.go:117] "RemoveContainer" containerID="20361a142a4f796647a628117b87c11b058d26cba7625f4f0d9c6430641a5e60" Jan 27 13:41:00 crc kubenswrapper[4786]: I0127 13:41:00.344071 4786 scope.go:117] "RemoveContainer" containerID="4b96c36d6a72849123bcc80fe16fb9324b0f3f88c937d88665b6ff353187964a" Jan 27 13:41:00 crc kubenswrapper[4786]: I0127 13:41:00.361538 4786 scope.go:117] "RemoveContainer" 
containerID="e1dc671ff55f64b496f63a9fbbc84a50742ef5ca38387bd931e2e8f01f0ce460" Jan 27 13:41:01 crc kubenswrapper[4786]: I0127 13:41:01.465446 4786 scope.go:117] "RemoveContainer" containerID="b3fd09fae5fd1f7b42a52b1a40e40497c84814d459766567b551babe818cd6a9" Jan 27 13:41:02 crc kubenswrapper[4786]: I0127 13:41:02.253235 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" event={"ID":"89580574-f031-4b88-98e5-18075ffa20c6","Type":"ContainerStarted","Data":"51bfa5d2d6c5d8c80adee4477d247c1723d57e1d72e9a114223b02be4fdd9502"} Jan 27 13:41:07 crc kubenswrapper[4786]: I0127 13:41:07.293196 4786 generic.go:334] "Generic (PLEG): container finished" podID="89580574-f031-4b88-98e5-18075ffa20c6" containerID="51bfa5d2d6c5d8c80adee4477d247c1723d57e1d72e9a114223b02be4fdd9502" exitCode=2 Jan 27 13:41:07 crc kubenswrapper[4786]: I0127 13:41:07.293289 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" event={"ID":"89580574-f031-4b88-98e5-18075ffa20c6","Type":"ContainerDied","Data":"51bfa5d2d6c5d8c80adee4477d247c1723d57e1d72e9a114223b02be4fdd9502"} Jan 27 13:41:07 crc kubenswrapper[4786]: I0127 13:41:07.293516 4786 scope.go:117] "RemoveContainer" containerID="b3fd09fae5fd1f7b42a52b1a40e40497c84814d459766567b551babe818cd6a9" Jan 27 13:41:07 crc kubenswrapper[4786]: I0127 13:41:07.294029 4786 scope.go:117] "RemoveContainer" containerID="51bfa5d2d6c5d8c80adee4477d247c1723d57e1d72e9a114223b02be4fdd9502" Jan 27 13:41:07 crc kubenswrapper[4786]: E0127 13:41:07.294355 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-bmr6f_nova-kuttl-default(89580574-f031-4b88-98e5-18075ffa20c6)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" podUID="89580574-f031-4b88-98e5-18075ffa20c6" Jan 27 
13:41:09 crc kubenswrapper[4786]: I0127 13:41:09.532257 4786 patch_prober.go:28] interesting pod/machine-config-daemon-7bxtk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:41:09 crc kubenswrapper[4786]: I0127 13:41:09.532979 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:41:19 crc kubenswrapper[4786]: I0127 13:41:19.464667 4786 scope.go:117] "RemoveContainer" containerID="51bfa5d2d6c5d8c80adee4477d247c1723d57e1d72e9a114223b02be4fdd9502" Jan 27 13:41:19 crc kubenswrapper[4786]: E0127 13:41:19.465385 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-bmr6f_nova-kuttl-default(89580574-f031-4b88-98e5-18075ffa20c6)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" podUID="89580574-f031-4b88-98e5-18075ffa20c6" Jan 27 13:41:34 crc kubenswrapper[4786]: I0127 13:41:34.464760 4786 scope.go:117] "RemoveContainer" containerID="51bfa5d2d6c5d8c80adee4477d247c1723d57e1d72e9a114223b02be4fdd9502" Jan 27 13:41:34 crc kubenswrapper[4786]: E0127 13:41:34.465492 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-bmr6f_nova-kuttl-default(89580574-f031-4b88-98e5-18075ffa20c6)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" 
podUID="89580574-f031-4b88-98e5-18075ffa20c6" Jan 27 13:41:39 crc kubenswrapper[4786]: I0127 13:41:39.532952 4786 patch_prober.go:28] interesting pod/machine-config-daemon-7bxtk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:41:39 crc kubenswrapper[4786]: I0127 13:41:39.534586 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:41:48 crc kubenswrapper[4786]: I0127 13:41:48.464843 4786 scope.go:117] "RemoveContainer" containerID="51bfa5d2d6c5d8c80adee4477d247c1723d57e1d72e9a114223b02be4fdd9502" Jan 27 13:41:48 crc kubenswrapper[4786]: I0127 13:41:48.804998 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" event={"ID":"89580574-f031-4b88-98e5-18075ffa20c6","Type":"ContainerStarted","Data":"940bc77603b9c5b3db3920682c34b52841d1a8d1220862773ac52a4bd82cfebe"} Jan 27 13:41:53 crc kubenswrapper[4786]: I0127 13:41:53.843895 4786 generic.go:334] "Generic (PLEG): container finished" podID="89580574-f031-4b88-98e5-18075ffa20c6" containerID="940bc77603b9c5b3db3920682c34b52841d1a8d1220862773ac52a4bd82cfebe" exitCode=2 Jan 27 13:41:53 crc kubenswrapper[4786]: I0127 13:41:53.843977 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" event={"ID":"89580574-f031-4b88-98e5-18075ffa20c6","Type":"ContainerDied","Data":"940bc77603b9c5b3db3920682c34b52841d1a8d1220862773ac52a4bd82cfebe"} Jan 27 13:41:53 crc kubenswrapper[4786]: I0127 13:41:53.844439 4786 scope.go:117] "RemoveContainer" 
containerID="51bfa5d2d6c5d8c80adee4477d247c1723d57e1d72e9a114223b02be4fdd9502" Jan 27 13:41:53 crc kubenswrapper[4786]: I0127 13:41:53.845154 4786 scope.go:117] "RemoveContainer" containerID="940bc77603b9c5b3db3920682c34b52841d1a8d1220862773ac52a4bd82cfebe" Jan 27 13:41:53 crc kubenswrapper[4786]: E0127 13:41:53.845481 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-bmr6f_nova-kuttl-default(89580574-f031-4b88-98e5-18075ffa20c6)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" podUID="89580574-f031-4b88-98e5-18075ffa20c6" Jan 27 13:42:00 crc kubenswrapper[4786]: I0127 13:42:00.570949 4786 scope.go:117] "RemoveContainer" containerID="fd7331605674ef0bcca5c26e92d50aa1d9e1fc56b71de769e78852c825b29778" Jan 27 13:42:04 crc kubenswrapper[4786]: I0127 13:42:04.464463 4786 scope.go:117] "RemoveContainer" containerID="940bc77603b9c5b3db3920682c34b52841d1a8d1220862773ac52a4bd82cfebe" Jan 27 13:42:04 crc kubenswrapper[4786]: E0127 13:42:04.465814 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-bmr6f_nova-kuttl-default(89580574-f031-4b88-98e5-18075ffa20c6)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" podUID="89580574-f031-4b88-98e5-18075ffa20c6" Jan 27 13:42:09 crc kubenswrapper[4786]: I0127 13:42:09.532794 4786 patch_prober.go:28] interesting pod/machine-config-daemon-7bxtk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:42:09 crc kubenswrapper[4786]: I0127 13:42:09.533128 4786 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:42:09 crc kubenswrapper[4786]: I0127 13:42:09.533173 4786 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" Jan 27 13:42:09 crc kubenswrapper[4786]: I0127 13:42:09.533829 4786 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5a4c20f9aecd7c4c87050e3915ee6dd31d35d262bd6e9faa3426ca360fe926a7"} pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 13:42:09 crc kubenswrapper[4786]: I0127 13:42:09.533890 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" containerID="cri-o://5a4c20f9aecd7c4c87050e3915ee6dd31d35d262bd6e9faa3426ca360fe926a7" gracePeriod=600 Jan 27 13:42:09 crc kubenswrapper[4786]: I0127 13:42:09.973677 4786 generic.go:334] "Generic (PLEG): container finished" podID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerID="5a4c20f9aecd7c4c87050e3915ee6dd31d35d262bd6e9faa3426ca360fe926a7" exitCode=0 Jan 27 13:42:09 crc kubenswrapper[4786]: I0127 13:42:09.973757 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" event={"ID":"2c6a2646-52f7-41be-8a81-3fed6eac75cc","Type":"ContainerDied","Data":"5a4c20f9aecd7c4c87050e3915ee6dd31d35d262bd6e9faa3426ca360fe926a7"} Jan 27 13:42:09 crc kubenswrapper[4786]: I0127 13:42:09.973961 4786 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" event={"ID":"2c6a2646-52f7-41be-8a81-3fed6eac75cc","Type":"ContainerStarted","Data":"9aee1b60d3250c89b1c0c42e1a4673943c8599c36c572d4bce7037dcd4019a1d"} Jan 27 13:42:09 crc kubenswrapper[4786]: I0127 13:42:09.973985 4786 scope.go:117] "RemoveContainer" containerID="2f94fef961bba31453718ee505c65aba4d5e2d46c581dccdf2daaebbe3a39906" Jan 27 13:42:16 crc kubenswrapper[4786]: I0127 13:42:16.465229 4786 scope.go:117] "RemoveContainer" containerID="940bc77603b9c5b3db3920682c34b52841d1a8d1220862773ac52a4bd82cfebe" Jan 27 13:42:16 crc kubenswrapper[4786]: E0127 13:42:16.466733 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-bmr6f_nova-kuttl-default(89580574-f031-4b88-98e5-18075ffa20c6)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" podUID="89580574-f031-4b88-98e5-18075ffa20c6" Jan 27 13:42:31 crc kubenswrapper[4786]: I0127 13:42:31.465480 4786 scope.go:117] "RemoveContainer" containerID="940bc77603b9c5b3db3920682c34b52841d1a8d1220862773ac52a4bd82cfebe" Jan 27 13:42:31 crc kubenswrapper[4786]: E0127 13:42:31.466270 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-bmr6f_nova-kuttl-default(89580574-f031-4b88-98e5-18075ffa20c6)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" podUID="89580574-f031-4b88-98e5-18075ffa20c6" Jan 27 13:42:43 crc kubenswrapper[4786]: I0127 13:42:43.465736 4786 scope.go:117] "RemoveContainer" containerID="940bc77603b9c5b3db3920682c34b52841d1a8d1220862773ac52a4bd82cfebe" Jan 27 13:42:43 crc kubenswrapper[4786]: E0127 13:42:43.466976 4786 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-bmr6f_nova-kuttl-default(89580574-f031-4b88-98e5-18075ffa20c6)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" podUID="89580574-f031-4b88-98e5-18075ffa20c6" Jan 27 13:42:58 crc kubenswrapper[4786]: I0127 13:42:58.464251 4786 scope.go:117] "RemoveContainer" containerID="940bc77603b9c5b3db3920682c34b52841d1a8d1220862773ac52a4bd82cfebe" Jan 27 13:42:58 crc kubenswrapper[4786]: E0127 13:42:58.464878 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-bmr6f_nova-kuttl-default(89580574-f031-4b88-98e5-18075ffa20c6)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" podUID="89580574-f031-4b88-98e5-18075ffa20c6" Jan 27 13:43:00 crc kubenswrapper[4786]: I0127 13:43:00.634415 4786 scope.go:117] "RemoveContainer" containerID="7aa8402801ee2c611c7f5a6909110ec3bccafb09c39ffa1c45aa598b5bddd0c8" Jan 27 13:43:00 crc kubenswrapper[4786]: I0127 13:43:00.660782 4786 scope.go:117] "RemoveContainer" containerID="5dd0eaa5f48cde69d4caa5058c8b1b5d4d7ed7fc75c93286097b843daab9eb19" Jan 27 13:43:00 crc kubenswrapper[4786]: I0127 13:43:00.696289 4786 scope.go:117] "RemoveContainer" containerID="5d40b8ee11a709c6f836bdee33d7d12b9c786b349ca313b826c8d031140d94b7" Jan 27 13:43:00 crc kubenswrapper[4786]: I0127 13:43:00.744095 4786 scope.go:117] "RemoveContainer" containerID="419e5894d5218d7fbfb6f86d65d3618a76b709e1a328f8cdfe1b1406bd74712d" Jan 27 13:43:00 crc kubenswrapper[4786]: I0127 13:43:00.786091 4786 scope.go:117] "RemoveContainer" containerID="f695fbe538bb43fad1842572a2a1a58bc09a50d4d003d2e749bbfd23d386ccd4" Jan 27 13:43:00 crc kubenswrapper[4786]: I0127 13:43:00.800963 4786 scope.go:117] "RemoveContainer" 
containerID="15b4207e9e3b5f257f21099e65296e910375b9954a8c148f92cf62f9e6e7ae0e" Jan 27 13:43:00 crc kubenswrapper[4786]: I0127 13:43:00.835519 4786 scope.go:117] "RemoveContainer" containerID="b2cad2ce810ffa2641f2961cbc00d46bcfca6d63116d9f257f1e11e6d1423c93" Jan 27 13:43:00 crc kubenswrapper[4786]: I0127 13:43:00.852785 4786 scope.go:117] "RemoveContainer" containerID="a72ad032f6e5e09bb494e8d26bcb346de2e4f33e093eed9843c72401c8827068" Jan 27 13:43:00 crc kubenswrapper[4786]: I0127 13:43:00.870291 4786 scope.go:117] "RemoveContainer" containerID="8b9cbc11753b921be12e241a470e6262e65a5b14ca7d199183aa5fca06cf8e5a" Jan 27 13:43:00 crc kubenswrapper[4786]: I0127 13:43:00.885377 4786 scope.go:117] "RemoveContainer" containerID="85663c4bbed577d667088115352b6b9abaa15f9794edc68db0e7f6a8701e7831" Jan 27 13:43:00 crc kubenswrapper[4786]: I0127 13:43:00.904519 4786 scope.go:117] "RemoveContainer" containerID="73f055031c91c0e12e194e0d68406eda35e16291a14a4d3c558be1c9ff4b3cc2" Jan 27 13:43:12 crc kubenswrapper[4786]: I0127 13:43:12.465269 4786 scope.go:117] "RemoveContainer" containerID="940bc77603b9c5b3db3920682c34b52841d1a8d1220862773ac52a4bd82cfebe" Jan 27 13:43:12 crc kubenswrapper[4786]: E0127 13:43:12.466120 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-bmr6f_nova-kuttl-default(89580574-f031-4b88-98e5-18075ffa20c6)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" podUID="89580574-f031-4b88-98e5-18075ffa20c6" Jan 27 13:43:20 crc kubenswrapper[4786]: I0127 13:43:20.195866 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2wtz4"] Jan 27 13:43:20 crc kubenswrapper[4786]: E0127 13:43:20.197942 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="563764d9-352e-45a8-9f42-895636610cf4" containerName="registry-server" Jan 27 
13:43:20 crc kubenswrapper[4786]: I0127 13:43:20.198043 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="563764d9-352e-45a8-9f42-895636610cf4" containerName="registry-server" Jan 27 13:43:20 crc kubenswrapper[4786]: E0127 13:43:20.198122 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="563764d9-352e-45a8-9f42-895636610cf4" containerName="extract-utilities" Jan 27 13:43:20 crc kubenswrapper[4786]: I0127 13:43:20.198191 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="563764d9-352e-45a8-9f42-895636610cf4" containerName="extract-utilities" Jan 27 13:43:20 crc kubenswrapper[4786]: E0127 13:43:20.198269 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="563764d9-352e-45a8-9f42-895636610cf4" containerName="extract-content" Jan 27 13:43:20 crc kubenswrapper[4786]: I0127 13:43:20.198341 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="563764d9-352e-45a8-9f42-895636610cf4" containerName="extract-content" Jan 27 13:43:20 crc kubenswrapper[4786]: I0127 13:43:20.198682 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="563764d9-352e-45a8-9f42-895636610cf4" containerName="registry-server" Jan 27 13:43:20 crc kubenswrapper[4786]: I0127 13:43:20.200127 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2wtz4" Jan 27 13:43:20 crc kubenswrapper[4786]: I0127 13:43:20.207796 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2wtz4"] Jan 27 13:43:20 crc kubenswrapper[4786]: I0127 13:43:20.374109 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa9735e3-4092-445a-a61c-8c17e1533def-catalog-content\") pod \"redhat-marketplace-2wtz4\" (UID: \"fa9735e3-4092-445a-a61c-8c17e1533def\") " pod="openshift-marketplace/redhat-marketplace-2wtz4" Jan 27 13:43:20 crc kubenswrapper[4786]: I0127 13:43:20.374162 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa9735e3-4092-445a-a61c-8c17e1533def-utilities\") pod \"redhat-marketplace-2wtz4\" (UID: \"fa9735e3-4092-445a-a61c-8c17e1533def\") " pod="openshift-marketplace/redhat-marketplace-2wtz4" Jan 27 13:43:20 crc kubenswrapper[4786]: I0127 13:43:20.374233 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmtsl\" (UniqueName: \"kubernetes.io/projected/fa9735e3-4092-445a-a61c-8c17e1533def-kube-api-access-rmtsl\") pod \"redhat-marketplace-2wtz4\" (UID: \"fa9735e3-4092-445a-a61c-8c17e1533def\") " pod="openshift-marketplace/redhat-marketplace-2wtz4" Jan 27 13:43:20 crc kubenswrapper[4786]: I0127 13:43:20.475453 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa9735e3-4092-445a-a61c-8c17e1533def-catalog-content\") pod \"redhat-marketplace-2wtz4\" (UID: \"fa9735e3-4092-445a-a61c-8c17e1533def\") " pod="openshift-marketplace/redhat-marketplace-2wtz4" Jan 27 13:43:20 crc kubenswrapper[4786]: I0127 13:43:20.475503 4786 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa9735e3-4092-445a-a61c-8c17e1533def-utilities\") pod \"redhat-marketplace-2wtz4\" (UID: \"fa9735e3-4092-445a-a61c-8c17e1533def\") " pod="openshift-marketplace/redhat-marketplace-2wtz4" Jan 27 13:43:20 crc kubenswrapper[4786]: I0127 13:43:20.475581 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmtsl\" (UniqueName: \"kubernetes.io/projected/fa9735e3-4092-445a-a61c-8c17e1533def-kube-api-access-rmtsl\") pod \"redhat-marketplace-2wtz4\" (UID: \"fa9735e3-4092-445a-a61c-8c17e1533def\") " pod="openshift-marketplace/redhat-marketplace-2wtz4" Jan 27 13:43:20 crc kubenswrapper[4786]: I0127 13:43:20.476103 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa9735e3-4092-445a-a61c-8c17e1533def-utilities\") pod \"redhat-marketplace-2wtz4\" (UID: \"fa9735e3-4092-445a-a61c-8c17e1533def\") " pod="openshift-marketplace/redhat-marketplace-2wtz4" Jan 27 13:43:20 crc kubenswrapper[4786]: I0127 13:43:20.476235 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa9735e3-4092-445a-a61c-8c17e1533def-catalog-content\") pod \"redhat-marketplace-2wtz4\" (UID: \"fa9735e3-4092-445a-a61c-8c17e1533def\") " pod="openshift-marketplace/redhat-marketplace-2wtz4" Jan 27 13:43:20 crc kubenswrapper[4786]: I0127 13:43:20.493197 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmtsl\" (UniqueName: \"kubernetes.io/projected/fa9735e3-4092-445a-a61c-8c17e1533def-kube-api-access-rmtsl\") pod \"redhat-marketplace-2wtz4\" (UID: \"fa9735e3-4092-445a-a61c-8c17e1533def\") " pod="openshift-marketplace/redhat-marketplace-2wtz4" Jan 27 13:43:20 crc kubenswrapper[4786]: I0127 13:43:20.517169 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2wtz4" Jan 27 13:43:20 crc kubenswrapper[4786]: I0127 13:43:20.968764 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2wtz4"] Jan 27 13:43:20 crc kubenswrapper[4786]: W0127 13:43:20.969819 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa9735e3_4092_445a_a61c_8c17e1533def.slice/crio-596f0757f53eb096d64c925006a4d611478aeedfff93dfbb8988157d82b321ce WatchSource:0}: Error finding container 596f0757f53eb096d64c925006a4d611478aeedfff93dfbb8988157d82b321ce: Status 404 returned error can't find the container with id 596f0757f53eb096d64c925006a4d611478aeedfff93dfbb8988157d82b321ce Jan 27 13:43:21 crc kubenswrapper[4786]: I0127 13:43:21.545154 4786 generic.go:334] "Generic (PLEG): container finished" podID="fa9735e3-4092-445a-a61c-8c17e1533def" containerID="59b7d2825a06429de5b517594ec669ec3d3dd871f8ce37be173084578f12e0cc" exitCode=0 Jan 27 13:43:21 crc kubenswrapper[4786]: I0127 13:43:21.545242 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2wtz4" event={"ID":"fa9735e3-4092-445a-a61c-8c17e1533def","Type":"ContainerDied","Data":"59b7d2825a06429de5b517594ec669ec3d3dd871f8ce37be173084578f12e0cc"} Jan 27 13:43:21 crc kubenswrapper[4786]: I0127 13:43:21.545490 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2wtz4" event={"ID":"fa9735e3-4092-445a-a61c-8c17e1533def","Type":"ContainerStarted","Data":"596f0757f53eb096d64c925006a4d611478aeedfff93dfbb8988157d82b321ce"} Jan 27 13:43:21 crc kubenswrapper[4786]: I0127 13:43:21.548734 4786 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 13:43:22 crc kubenswrapper[4786]: I0127 13:43:22.555225 4786 generic.go:334] "Generic (PLEG): container finished" 
podID="fa9735e3-4092-445a-a61c-8c17e1533def" containerID="e9cb9ef0f977e00aa289c8695cb4bf34f203b9b4767e0a952bcd377ef6e2871c" exitCode=0 Jan 27 13:43:22 crc kubenswrapper[4786]: I0127 13:43:22.555310 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2wtz4" event={"ID":"fa9735e3-4092-445a-a61c-8c17e1533def","Type":"ContainerDied","Data":"e9cb9ef0f977e00aa289c8695cb4bf34f203b9b4767e0a952bcd377ef6e2871c"} Jan 27 13:43:23 crc kubenswrapper[4786]: I0127 13:43:23.465816 4786 scope.go:117] "RemoveContainer" containerID="940bc77603b9c5b3db3920682c34b52841d1a8d1220862773ac52a4bd82cfebe" Jan 27 13:43:23 crc kubenswrapper[4786]: I0127 13:43:23.568342 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2wtz4" event={"ID":"fa9735e3-4092-445a-a61c-8c17e1533def","Type":"ContainerStarted","Data":"7360529b9cef17d0ebeb0b31c28e25dca5b9c30a9ea5a12c9d1dd4b4d890f8f2"} Jan 27 13:43:23 crc kubenswrapper[4786]: I0127 13:43:23.594432 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2wtz4" podStartSLOduration=2.09107376 podStartE2EDuration="3.594412903s" podCreationTimestamp="2026-01-27 13:43:20 +0000 UTC" firstStartedPulling="2026-01-27 13:43:21.548479479 +0000 UTC m=+2184.759093598" lastFinishedPulling="2026-01-27 13:43:23.051818622 +0000 UTC m=+2186.262432741" observedRunningTime="2026-01-27 13:43:23.586688933 +0000 UTC m=+2186.797303082" watchObservedRunningTime="2026-01-27 13:43:23.594412903 +0000 UTC m=+2186.805027022" Jan 27 13:43:24 crc kubenswrapper[4786]: I0127 13:43:24.578910 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" event={"ID":"89580574-f031-4b88-98e5-18075ffa20c6","Type":"ContainerStarted","Data":"212517e4d39a27c57e3cfa55d312f2b61a827e446f0c16acda35e6e350b39637"} Jan 27 13:43:28 crc kubenswrapper[4786]: I0127 13:43:28.613494 4786 
generic.go:334] "Generic (PLEG): container finished" podID="89580574-f031-4b88-98e5-18075ffa20c6" containerID="212517e4d39a27c57e3cfa55d312f2b61a827e446f0c16acda35e6e350b39637" exitCode=2 Jan 27 13:43:28 crc kubenswrapper[4786]: I0127 13:43:28.613568 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" event={"ID":"89580574-f031-4b88-98e5-18075ffa20c6","Type":"ContainerDied","Data":"212517e4d39a27c57e3cfa55d312f2b61a827e446f0c16acda35e6e350b39637"} Jan 27 13:43:28 crc kubenswrapper[4786]: I0127 13:43:28.613880 4786 scope.go:117] "RemoveContainer" containerID="940bc77603b9c5b3db3920682c34b52841d1a8d1220862773ac52a4bd82cfebe" Jan 27 13:43:28 crc kubenswrapper[4786]: I0127 13:43:28.614460 4786 scope.go:117] "RemoveContainer" containerID="212517e4d39a27c57e3cfa55d312f2b61a827e446f0c16acda35e6e350b39637" Jan 27 13:43:28 crc kubenswrapper[4786]: E0127 13:43:28.614694 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-bmr6f_nova-kuttl-default(89580574-f031-4b88-98e5-18075ffa20c6)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" podUID="89580574-f031-4b88-98e5-18075ffa20c6" Jan 27 13:43:30 crc kubenswrapper[4786]: I0127 13:43:30.517851 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2wtz4" Jan 27 13:43:30 crc kubenswrapper[4786]: I0127 13:43:30.517999 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2wtz4" Jan 27 13:43:30 crc kubenswrapper[4786]: I0127 13:43:30.567355 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2wtz4" Jan 27 13:43:30 crc kubenswrapper[4786]: I0127 13:43:30.671668 4786 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2wtz4" Jan 27 13:43:30 crc kubenswrapper[4786]: I0127 13:43:30.800745 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2wtz4"] Jan 27 13:43:32 crc kubenswrapper[4786]: I0127 13:43:32.650089 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2wtz4" podUID="fa9735e3-4092-445a-a61c-8c17e1533def" containerName="registry-server" containerID="cri-o://7360529b9cef17d0ebeb0b31c28e25dca5b9c30a9ea5a12c9d1dd4b4d890f8f2" gracePeriod=2 Jan 27 13:43:33 crc kubenswrapper[4786]: I0127 13:43:33.065528 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2wtz4" Jan 27 13:43:33 crc kubenswrapper[4786]: I0127 13:43:33.179757 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa9735e3-4092-445a-a61c-8c17e1533def-utilities\") pod \"fa9735e3-4092-445a-a61c-8c17e1533def\" (UID: \"fa9735e3-4092-445a-a61c-8c17e1533def\") " Jan 27 13:43:33 crc kubenswrapper[4786]: I0127 13:43:33.179807 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa9735e3-4092-445a-a61c-8c17e1533def-catalog-content\") pod \"fa9735e3-4092-445a-a61c-8c17e1533def\" (UID: \"fa9735e3-4092-445a-a61c-8c17e1533def\") " Jan 27 13:43:33 crc kubenswrapper[4786]: I0127 13:43:33.179918 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmtsl\" (UniqueName: \"kubernetes.io/projected/fa9735e3-4092-445a-a61c-8c17e1533def-kube-api-access-rmtsl\") pod \"fa9735e3-4092-445a-a61c-8c17e1533def\" (UID: \"fa9735e3-4092-445a-a61c-8c17e1533def\") " Jan 27 13:43:33 crc kubenswrapper[4786]: I0127 13:43:33.184248 4786 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa9735e3-4092-445a-a61c-8c17e1533def-utilities" (OuterVolumeSpecName: "utilities") pod "fa9735e3-4092-445a-a61c-8c17e1533def" (UID: "fa9735e3-4092-445a-a61c-8c17e1533def"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:43:33 crc kubenswrapper[4786]: I0127 13:43:33.187052 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa9735e3-4092-445a-a61c-8c17e1533def-kube-api-access-rmtsl" (OuterVolumeSpecName: "kube-api-access-rmtsl") pod "fa9735e3-4092-445a-a61c-8c17e1533def" (UID: "fa9735e3-4092-445a-a61c-8c17e1533def"). InnerVolumeSpecName "kube-api-access-rmtsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:43:33 crc kubenswrapper[4786]: I0127 13:43:33.203663 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa9735e3-4092-445a-a61c-8c17e1533def-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa9735e3-4092-445a-a61c-8c17e1533def" (UID: "fa9735e3-4092-445a-a61c-8c17e1533def"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:43:33 crc kubenswrapper[4786]: I0127 13:43:33.283142 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa9735e3-4092-445a-a61c-8c17e1533def-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 13:43:33 crc kubenswrapper[4786]: I0127 13:43:33.283217 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa9735e3-4092-445a-a61c-8c17e1533def-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 13:43:33 crc kubenswrapper[4786]: I0127 13:43:33.283244 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmtsl\" (UniqueName: \"kubernetes.io/projected/fa9735e3-4092-445a-a61c-8c17e1533def-kube-api-access-rmtsl\") on node \"crc\" DevicePath \"\"" Jan 27 13:43:33 crc kubenswrapper[4786]: I0127 13:43:33.662035 4786 generic.go:334] "Generic (PLEG): container finished" podID="fa9735e3-4092-445a-a61c-8c17e1533def" containerID="7360529b9cef17d0ebeb0b31c28e25dca5b9c30a9ea5a12c9d1dd4b4d890f8f2" exitCode=0 Jan 27 13:43:33 crc kubenswrapper[4786]: I0127 13:43:33.662143 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2wtz4" Jan 27 13:43:33 crc kubenswrapper[4786]: I0127 13:43:33.662119 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2wtz4" event={"ID":"fa9735e3-4092-445a-a61c-8c17e1533def","Type":"ContainerDied","Data":"7360529b9cef17d0ebeb0b31c28e25dca5b9c30a9ea5a12c9d1dd4b4d890f8f2"} Jan 27 13:43:33 crc kubenswrapper[4786]: I0127 13:43:33.662236 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2wtz4" event={"ID":"fa9735e3-4092-445a-a61c-8c17e1533def","Type":"ContainerDied","Data":"596f0757f53eb096d64c925006a4d611478aeedfff93dfbb8988157d82b321ce"} Jan 27 13:43:33 crc kubenswrapper[4786]: I0127 13:43:33.662461 4786 scope.go:117] "RemoveContainer" containerID="7360529b9cef17d0ebeb0b31c28e25dca5b9c30a9ea5a12c9d1dd4b4d890f8f2" Jan 27 13:43:33 crc kubenswrapper[4786]: I0127 13:43:33.689076 4786 scope.go:117] "RemoveContainer" containerID="e9cb9ef0f977e00aa289c8695cb4bf34f203b9b4767e0a952bcd377ef6e2871c" Jan 27 13:43:33 crc kubenswrapper[4786]: I0127 13:43:33.690314 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2wtz4"] Jan 27 13:43:33 crc kubenswrapper[4786]: I0127 13:43:33.699316 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2wtz4"] Jan 27 13:43:33 crc kubenswrapper[4786]: I0127 13:43:33.705619 4786 scope.go:117] "RemoveContainer" containerID="59b7d2825a06429de5b517594ec669ec3d3dd871f8ce37be173084578f12e0cc" Jan 27 13:43:33 crc kubenswrapper[4786]: I0127 13:43:33.739687 4786 scope.go:117] "RemoveContainer" containerID="7360529b9cef17d0ebeb0b31c28e25dca5b9c30a9ea5a12c9d1dd4b4d890f8f2" Jan 27 13:43:33 crc kubenswrapper[4786]: E0127 13:43:33.740292 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7360529b9cef17d0ebeb0b31c28e25dca5b9c30a9ea5a12c9d1dd4b4d890f8f2\": container with ID starting with 7360529b9cef17d0ebeb0b31c28e25dca5b9c30a9ea5a12c9d1dd4b4d890f8f2 not found: ID does not exist" containerID="7360529b9cef17d0ebeb0b31c28e25dca5b9c30a9ea5a12c9d1dd4b4d890f8f2" Jan 27 13:43:33 crc kubenswrapper[4786]: I0127 13:43:33.740346 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7360529b9cef17d0ebeb0b31c28e25dca5b9c30a9ea5a12c9d1dd4b4d890f8f2"} err="failed to get container status \"7360529b9cef17d0ebeb0b31c28e25dca5b9c30a9ea5a12c9d1dd4b4d890f8f2\": rpc error: code = NotFound desc = could not find container \"7360529b9cef17d0ebeb0b31c28e25dca5b9c30a9ea5a12c9d1dd4b4d890f8f2\": container with ID starting with 7360529b9cef17d0ebeb0b31c28e25dca5b9c30a9ea5a12c9d1dd4b4d890f8f2 not found: ID does not exist" Jan 27 13:43:33 crc kubenswrapper[4786]: I0127 13:43:33.740376 4786 scope.go:117] "RemoveContainer" containerID="e9cb9ef0f977e00aa289c8695cb4bf34f203b9b4767e0a952bcd377ef6e2871c" Jan 27 13:43:33 crc kubenswrapper[4786]: E0127 13:43:33.740717 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9cb9ef0f977e00aa289c8695cb4bf34f203b9b4767e0a952bcd377ef6e2871c\": container with ID starting with e9cb9ef0f977e00aa289c8695cb4bf34f203b9b4767e0a952bcd377ef6e2871c not found: ID does not exist" containerID="e9cb9ef0f977e00aa289c8695cb4bf34f203b9b4767e0a952bcd377ef6e2871c" Jan 27 13:43:33 crc kubenswrapper[4786]: I0127 13:43:33.740748 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9cb9ef0f977e00aa289c8695cb4bf34f203b9b4767e0a952bcd377ef6e2871c"} err="failed to get container status \"e9cb9ef0f977e00aa289c8695cb4bf34f203b9b4767e0a952bcd377ef6e2871c\": rpc error: code = NotFound desc = could not find container \"e9cb9ef0f977e00aa289c8695cb4bf34f203b9b4767e0a952bcd377ef6e2871c\": container with ID 
starting with e9cb9ef0f977e00aa289c8695cb4bf34f203b9b4767e0a952bcd377ef6e2871c not found: ID does not exist" Jan 27 13:43:33 crc kubenswrapper[4786]: I0127 13:43:33.740766 4786 scope.go:117] "RemoveContainer" containerID="59b7d2825a06429de5b517594ec669ec3d3dd871f8ce37be173084578f12e0cc" Jan 27 13:43:33 crc kubenswrapper[4786]: E0127 13:43:33.741149 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59b7d2825a06429de5b517594ec669ec3d3dd871f8ce37be173084578f12e0cc\": container with ID starting with 59b7d2825a06429de5b517594ec669ec3d3dd871f8ce37be173084578f12e0cc not found: ID does not exist" containerID="59b7d2825a06429de5b517594ec669ec3d3dd871f8ce37be173084578f12e0cc" Jan 27 13:43:33 crc kubenswrapper[4786]: I0127 13:43:33.741238 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59b7d2825a06429de5b517594ec669ec3d3dd871f8ce37be173084578f12e0cc"} err="failed to get container status \"59b7d2825a06429de5b517594ec669ec3d3dd871f8ce37be173084578f12e0cc\": rpc error: code = NotFound desc = could not find container \"59b7d2825a06429de5b517594ec669ec3d3dd871f8ce37be173084578f12e0cc\": container with ID starting with 59b7d2825a06429de5b517594ec669ec3d3dd871f8ce37be173084578f12e0cc not found: ID does not exist" Jan 27 13:43:35 crc kubenswrapper[4786]: I0127 13:43:35.475468 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa9735e3-4092-445a-a61c-8c17e1533def" path="/var/lib/kubelet/pods/fa9735e3-4092-445a-a61c-8c17e1533def/volumes" Jan 27 13:43:41 crc kubenswrapper[4786]: I0127 13:43:41.466426 4786 scope.go:117] "RemoveContainer" containerID="212517e4d39a27c57e3cfa55d312f2b61a827e446f0c16acda35e6e350b39637" Jan 27 13:43:41 crc kubenswrapper[4786]: E0127 13:43:41.467798 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting 
failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-bmr6f_nova-kuttl-default(89580574-f031-4b88-98e5-18075ffa20c6)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" podUID="89580574-f031-4b88-98e5-18075ffa20c6" Jan 27 13:43:56 crc kubenswrapper[4786]: I0127 13:43:56.465592 4786 scope.go:117] "RemoveContainer" containerID="212517e4d39a27c57e3cfa55d312f2b61a827e446f0c16acda35e6e350b39637" Jan 27 13:43:56 crc kubenswrapper[4786]: E0127 13:43:56.466754 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-bmr6f_nova-kuttl-default(89580574-f031-4b88-98e5-18075ffa20c6)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" podUID="89580574-f031-4b88-98e5-18075ffa20c6" Jan 27 13:44:01 crc kubenswrapper[4786]: I0127 13:44:01.065383 4786 scope.go:117] "RemoveContainer" containerID="be632094cabe357e076c0316ba3c28d193938661eaf06b02fbb9ba8532faee2d" Jan 27 13:44:01 crc kubenswrapper[4786]: I0127 13:44:01.117828 4786 scope.go:117] "RemoveContainer" containerID="fe611adf5e7e8af4eeb6f04c0923f019f55e5710efb57e7d31e4ce1f3dc6b951" Jan 27 13:44:01 crc kubenswrapper[4786]: I0127 13:44:01.138335 4786 scope.go:117] "RemoveContainer" containerID="9207e066c51944cff29e83b28af6f19b1a9afcb3a7a370da0790d8269def2295" Jan 27 13:44:01 crc kubenswrapper[4786]: I0127 13:44:01.171980 4786 scope.go:117] "RemoveContainer" containerID="0e6a8752f4314f3e4f2a1269aa8e2d4a0b2f48b7784f3ab7a6d3541ad48948f5" Jan 27 13:44:01 crc kubenswrapper[4786]: I0127 13:44:01.215137 4786 scope.go:117] "RemoveContainer" containerID="e745c5cf9f468ed80350d57344f8d7471e7c9d7f631645dcfe93d3523f9e8daf" Jan 27 13:44:08 crc kubenswrapper[4786]: I0127 13:44:08.464907 4786 scope.go:117] "RemoveContainer" containerID="212517e4d39a27c57e3cfa55d312f2b61a827e446f0c16acda35e6e350b39637" Jan 27 13:44:08 crc kubenswrapper[4786]: E0127 
13:44:08.466874 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-bmr6f_nova-kuttl-default(89580574-f031-4b88-98e5-18075ffa20c6)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" podUID="89580574-f031-4b88-98e5-18075ffa20c6" Jan 27 13:44:09 crc kubenswrapper[4786]: I0127 13:44:09.532995 4786 patch_prober.go:28] interesting pod/machine-config-daemon-7bxtk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:44:09 crc kubenswrapper[4786]: I0127 13:44:09.533056 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:44:20 crc kubenswrapper[4786]: I0127 13:44:20.464785 4786 scope.go:117] "RemoveContainer" containerID="212517e4d39a27c57e3cfa55d312f2b61a827e446f0c16acda35e6e350b39637" Jan 27 13:44:20 crc kubenswrapper[4786]: E0127 13:44:20.465449 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-bmr6f_nova-kuttl-default(89580574-f031-4b88-98e5-18075ffa20c6)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" podUID="89580574-f031-4b88-98e5-18075ffa20c6" Jan 27 13:44:31 crc kubenswrapper[4786]: I0127 13:44:31.464817 4786 scope.go:117] "RemoveContainer" containerID="212517e4d39a27c57e3cfa55d312f2b61a827e446f0c16acda35e6e350b39637" Jan 27 13:44:31 crc 
kubenswrapper[4786]: E0127 13:44:31.465569 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-bmr6f_nova-kuttl-default(89580574-f031-4b88-98e5-18075ffa20c6)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" podUID="89580574-f031-4b88-98e5-18075ffa20c6" Jan 27 13:44:34 crc kubenswrapper[4786]: I0127 13:44:34.638332 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cp2wj"] Jan 27 13:44:34 crc kubenswrapper[4786]: E0127 13:44:34.639022 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa9735e3-4092-445a-a61c-8c17e1533def" containerName="extract-content" Jan 27 13:44:34 crc kubenswrapper[4786]: I0127 13:44:34.639036 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa9735e3-4092-445a-a61c-8c17e1533def" containerName="extract-content" Jan 27 13:44:34 crc kubenswrapper[4786]: E0127 13:44:34.639052 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa9735e3-4092-445a-a61c-8c17e1533def" containerName="registry-server" Jan 27 13:44:34 crc kubenswrapper[4786]: I0127 13:44:34.639058 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa9735e3-4092-445a-a61c-8c17e1533def" containerName="registry-server" Jan 27 13:44:34 crc kubenswrapper[4786]: E0127 13:44:34.639081 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa9735e3-4092-445a-a61c-8c17e1533def" containerName="extract-utilities" Jan 27 13:44:34 crc kubenswrapper[4786]: I0127 13:44:34.639089 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa9735e3-4092-445a-a61c-8c17e1533def" containerName="extract-utilities" Jan 27 13:44:34 crc kubenswrapper[4786]: I0127 13:44:34.639254 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa9735e3-4092-445a-a61c-8c17e1533def" 
containerName="registry-server" Jan 27 13:44:34 crc kubenswrapper[4786]: I0127 13:44:34.640643 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cp2wj" Jan 27 13:44:34 crc kubenswrapper[4786]: I0127 13:44:34.662307 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cp2wj"] Jan 27 13:44:34 crc kubenswrapper[4786]: I0127 13:44:34.708294 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksggq\" (UniqueName: \"kubernetes.io/projected/fcccce0a-e1ed-4cc6-abc9-faaa827e133d-kube-api-access-ksggq\") pod \"community-operators-cp2wj\" (UID: \"fcccce0a-e1ed-4cc6-abc9-faaa827e133d\") " pod="openshift-marketplace/community-operators-cp2wj" Jan 27 13:44:34 crc kubenswrapper[4786]: I0127 13:44:34.708364 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcccce0a-e1ed-4cc6-abc9-faaa827e133d-catalog-content\") pod \"community-operators-cp2wj\" (UID: \"fcccce0a-e1ed-4cc6-abc9-faaa827e133d\") " pod="openshift-marketplace/community-operators-cp2wj" Jan 27 13:44:34 crc kubenswrapper[4786]: I0127 13:44:34.708398 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcccce0a-e1ed-4cc6-abc9-faaa827e133d-utilities\") pod \"community-operators-cp2wj\" (UID: \"fcccce0a-e1ed-4cc6-abc9-faaa827e133d\") " pod="openshift-marketplace/community-operators-cp2wj" Jan 27 13:44:34 crc kubenswrapper[4786]: I0127 13:44:34.809706 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksggq\" (UniqueName: \"kubernetes.io/projected/fcccce0a-e1ed-4cc6-abc9-faaa827e133d-kube-api-access-ksggq\") pod \"community-operators-cp2wj\" (UID: \"fcccce0a-e1ed-4cc6-abc9-faaa827e133d\") 
" pod="openshift-marketplace/community-operators-cp2wj" Jan 27 13:44:34 crc kubenswrapper[4786]: I0127 13:44:34.809788 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcccce0a-e1ed-4cc6-abc9-faaa827e133d-catalog-content\") pod \"community-operators-cp2wj\" (UID: \"fcccce0a-e1ed-4cc6-abc9-faaa827e133d\") " pod="openshift-marketplace/community-operators-cp2wj" Jan 27 13:44:34 crc kubenswrapper[4786]: I0127 13:44:34.809827 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcccce0a-e1ed-4cc6-abc9-faaa827e133d-utilities\") pod \"community-operators-cp2wj\" (UID: \"fcccce0a-e1ed-4cc6-abc9-faaa827e133d\") " pod="openshift-marketplace/community-operators-cp2wj" Jan 27 13:44:34 crc kubenswrapper[4786]: I0127 13:44:34.810659 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcccce0a-e1ed-4cc6-abc9-faaa827e133d-catalog-content\") pod \"community-operators-cp2wj\" (UID: \"fcccce0a-e1ed-4cc6-abc9-faaa827e133d\") " pod="openshift-marketplace/community-operators-cp2wj" Jan 27 13:44:34 crc kubenswrapper[4786]: I0127 13:44:34.810705 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcccce0a-e1ed-4cc6-abc9-faaa827e133d-utilities\") pod \"community-operators-cp2wj\" (UID: \"fcccce0a-e1ed-4cc6-abc9-faaa827e133d\") " pod="openshift-marketplace/community-operators-cp2wj" Jan 27 13:44:34 crc kubenswrapper[4786]: I0127 13:44:34.836229 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksggq\" (UniqueName: \"kubernetes.io/projected/fcccce0a-e1ed-4cc6-abc9-faaa827e133d-kube-api-access-ksggq\") pod \"community-operators-cp2wj\" (UID: \"fcccce0a-e1ed-4cc6-abc9-faaa827e133d\") " 
pod="openshift-marketplace/community-operators-cp2wj" Jan 27 13:44:34 crc kubenswrapper[4786]: I0127 13:44:34.959729 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cp2wj" Jan 27 13:44:35 crc kubenswrapper[4786]: I0127 13:44:35.481356 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cp2wj"] Jan 27 13:44:36 crc kubenswrapper[4786]: I0127 13:44:36.186843 4786 generic.go:334] "Generic (PLEG): container finished" podID="fcccce0a-e1ed-4cc6-abc9-faaa827e133d" containerID="29abd1e66cc5e8613cc06f6269682681d2425ac2898586a077132eac3b354d99" exitCode=0 Jan 27 13:44:36 crc kubenswrapper[4786]: I0127 13:44:36.186991 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cp2wj" event={"ID":"fcccce0a-e1ed-4cc6-abc9-faaa827e133d","Type":"ContainerDied","Data":"29abd1e66cc5e8613cc06f6269682681d2425ac2898586a077132eac3b354d99"} Jan 27 13:44:36 crc kubenswrapper[4786]: I0127 13:44:36.187209 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cp2wj" event={"ID":"fcccce0a-e1ed-4cc6-abc9-faaa827e133d","Type":"ContainerStarted","Data":"479bde14825db9a83ecf61ad2b14447668c55282b28ce9d3b040294c21f5c93a"} Jan 27 13:44:37 crc kubenswrapper[4786]: I0127 13:44:37.039933 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-88xq9"] Jan 27 13:44:37 crc kubenswrapper[4786]: I0127 13:44:37.042221 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-88xq9" Jan 27 13:44:37 crc kubenswrapper[4786]: I0127 13:44:37.051781 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-88xq9"] Jan 27 13:44:37 crc kubenswrapper[4786]: I0127 13:44:37.146551 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d2ec652-7293-46bc-b9ab-3225fbc43fb0-utilities\") pod \"certified-operators-88xq9\" (UID: \"6d2ec652-7293-46bc-b9ab-3225fbc43fb0\") " pod="openshift-marketplace/certified-operators-88xq9" Jan 27 13:44:37 crc kubenswrapper[4786]: I0127 13:44:37.146680 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d2ec652-7293-46bc-b9ab-3225fbc43fb0-catalog-content\") pod \"certified-operators-88xq9\" (UID: \"6d2ec652-7293-46bc-b9ab-3225fbc43fb0\") " pod="openshift-marketplace/certified-operators-88xq9" Jan 27 13:44:37 crc kubenswrapper[4786]: I0127 13:44:37.146732 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l48sn\" (UniqueName: \"kubernetes.io/projected/6d2ec652-7293-46bc-b9ab-3225fbc43fb0-kube-api-access-l48sn\") pod \"certified-operators-88xq9\" (UID: \"6d2ec652-7293-46bc-b9ab-3225fbc43fb0\") " pod="openshift-marketplace/certified-operators-88xq9" Jan 27 13:44:37 crc kubenswrapper[4786]: I0127 13:44:37.248220 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d2ec652-7293-46bc-b9ab-3225fbc43fb0-utilities\") pod \"certified-operators-88xq9\" (UID: \"6d2ec652-7293-46bc-b9ab-3225fbc43fb0\") " pod="openshift-marketplace/certified-operators-88xq9" Jan 27 13:44:37 crc kubenswrapper[4786]: I0127 13:44:37.248361 4786 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d2ec652-7293-46bc-b9ab-3225fbc43fb0-catalog-content\") pod \"certified-operators-88xq9\" (UID: \"6d2ec652-7293-46bc-b9ab-3225fbc43fb0\") " pod="openshift-marketplace/certified-operators-88xq9" Jan 27 13:44:37 crc kubenswrapper[4786]: I0127 13:44:37.248412 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l48sn\" (UniqueName: \"kubernetes.io/projected/6d2ec652-7293-46bc-b9ab-3225fbc43fb0-kube-api-access-l48sn\") pod \"certified-operators-88xq9\" (UID: \"6d2ec652-7293-46bc-b9ab-3225fbc43fb0\") " pod="openshift-marketplace/certified-operators-88xq9" Jan 27 13:44:37 crc kubenswrapper[4786]: I0127 13:44:37.249254 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d2ec652-7293-46bc-b9ab-3225fbc43fb0-utilities\") pod \"certified-operators-88xq9\" (UID: \"6d2ec652-7293-46bc-b9ab-3225fbc43fb0\") " pod="openshift-marketplace/certified-operators-88xq9" Jan 27 13:44:37 crc kubenswrapper[4786]: I0127 13:44:37.249368 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d2ec652-7293-46bc-b9ab-3225fbc43fb0-catalog-content\") pod \"certified-operators-88xq9\" (UID: \"6d2ec652-7293-46bc-b9ab-3225fbc43fb0\") " pod="openshift-marketplace/certified-operators-88xq9" Jan 27 13:44:37 crc kubenswrapper[4786]: I0127 13:44:37.274913 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l48sn\" (UniqueName: \"kubernetes.io/projected/6d2ec652-7293-46bc-b9ab-3225fbc43fb0-kube-api-access-l48sn\") pod \"certified-operators-88xq9\" (UID: \"6d2ec652-7293-46bc-b9ab-3225fbc43fb0\") " pod="openshift-marketplace/certified-operators-88xq9" Jan 27 13:44:37 crc kubenswrapper[4786]: I0127 13:44:37.392800 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-88xq9" Jan 27 13:44:38 crc kubenswrapper[4786]: I0127 13:44:38.006116 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-88xq9"] Jan 27 13:44:38 crc kubenswrapper[4786]: W0127 13:44:38.012860 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d2ec652_7293_46bc_b9ab_3225fbc43fb0.slice/crio-6e8411071d5c73a7f5a4a4c09f063db1c21a747d14096c6c1b59d303ed1c2c85 WatchSource:0}: Error finding container 6e8411071d5c73a7f5a4a4c09f063db1c21a747d14096c6c1b59d303ed1c2c85: Status 404 returned error can't find the container with id 6e8411071d5c73a7f5a4a4c09f063db1c21a747d14096c6c1b59d303ed1c2c85 Jan 27 13:44:38 crc kubenswrapper[4786]: I0127 13:44:38.203693 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-88xq9" event={"ID":"6d2ec652-7293-46bc-b9ab-3225fbc43fb0","Type":"ContainerStarted","Data":"6e8411071d5c73a7f5a4a4c09f063db1c21a747d14096c6c1b59d303ed1c2c85"} Jan 27 13:44:38 crc kubenswrapper[4786]: I0127 13:44:38.206252 4786 generic.go:334] "Generic (PLEG): container finished" podID="fcccce0a-e1ed-4cc6-abc9-faaa827e133d" containerID="2890c6ccef7a271aa9b6cb6368c1d194d5fb401dece9e8a9faf54ae2be74474d" exitCode=0 Jan 27 13:44:38 crc kubenswrapper[4786]: I0127 13:44:38.206304 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cp2wj" event={"ID":"fcccce0a-e1ed-4cc6-abc9-faaa827e133d","Type":"ContainerDied","Data":"2890c6ccef7a271aa9b6cb6368c1d194d5fb401dece9e8a9faf54ae2be74474d"} Jan 27 13:44:39 crc kubenswrapper[4786]: I0127 13:44:39.217065 4786 generic.go:334] "Generic (PLEG): container finished" podID="6d2ec652-7293-46bc-b9ab-3225fbc43fb0" containerID="7893fa04fd152f296ff2b67cfa21aed1be51e2c5d097602025889016c19aa6ce" exitCode=0 Jan 27 13:44:39 crc kubenswrapper[4786]: I0127 
13:44:39.217146 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-88xq9" event={"ID":"6d2ec652-7293-46bc-b9ab-3225fbc43fb0","Type":"ContainerDied","Data":"7893fa04fd152f296ff2b67cfa21aed1be51e2c5d097602025889016c19aa6ce"} Jan 27 13:44:39 crc kubenswrapper[4786]: I0127 13:44:39.533262 4786 patch_prober.go:28] interesting pod/machine-config-daemon-7bxtk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:44:39 crc kubenswrapper[4786]: I0127 13:44:39.533633 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:44:40 crc kubenswrapper[4786]: I0127 13:44:40.225639 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cp2wj" event={"ID":"fcccce0a-e1ed-4cc6-abc9-faaa827e133d","Type":"ContainerStarted","Data":"1211a77c47c53075f41ce72f14adffe5901f6314bd78dcb76bd61795554adfab"} Jan 27 13:44:40 crc kubenswrapper[4786]: I0127 13:44:40.247310 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cp2wj" podStartSLOduration=3.296646533 podStartE2EDuration="6.247288624s" podCreationTimestamp="2026-01-27 13:44:34 +0000 UTC" firstStartedPulling="2026-01-27 13:44:36.189040901 +0000 UTC m=+2259.399655020" lastFinishedPulling="2026-01-27 13:44:39.139682992 +0000 UTC m=+2262.350297111" observedRunningTime="2026-01-27 13:44:40.242051552 +0000 UTC m=+2263.452665681" watchObservedRunningTime="2026-01-27 13:44:40.247288624 +0000 UTC m=+2263.457902743" Jan 27 13:44:41 crc 
kubenswrapper[4786]: I0127 13:44:41.236153 4786 generic.go:334] "Generic (PLEG): container finished" podID="6d2ec652-7293-46bc-b9ab-3225fbc43fb0" containerID="86e964e88befe9c9db0d7aa62bc1a96c957813f0a434cc8ddc1bb71ec01ea7fd" exitCode=0 Jan 27 13:44:41 crc kubenswrapper[4786]: I0127 13:44:41.237886 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-88xq9" event={"ID":"6d2ec652-7293-46bc-b9ab-3225fbc43fb0","Type":"ContainerDied","Data":"86e964e88befe9c9db0d7aa62bc1a96c957813f0a434cc8ddc1bb71ec01ea7fd"} Jan 27 13:44:42 crc kubenswrapper[4786]: I0127 13:44:42.248701 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-88xq9" event={"ID":"6d2ec652-7293-46bc-b9ab-3225fbc43fb0","Type":"ContainerStarted","Data":"4a06f18c44037eacb838b8c112495b43dedd1cc5150977378f555e3c895393e5"} Jan 27 13:44:42 crc kubenswrapper[4786]: I0127 13:44:42.266818 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-88xq9" podStartSLOduration=2.699242343 podStartE2EDuration="5.266797627s" podCreationTimestamp="2026-01-27 13:44:37 +0000 UTC" firstStartedPulling="2026-01-27 13:44:39.218585741 +0000 UTC m=+2262.429199860" lastFinishedPulling="2026-01-27 13:44:41.786141025 +0000 UTC m=+2264.996755144" observedRunningTime="2026-01-27 13:44:42.263384365 +0000 UTC m=+2265.473998494" watchObservedRunningTime="2026-01-27 13:44:42.266797627 +0000 UTC m=+2265.477411746" Jan 27 13:44:44 crc kubenswrapper[4786]: I0127 13:44:44.466644 4786 scope.go:117] "RemoveContainer" containerID="212517e4d39a27c57e3cfa55d312f2b61a827e446f0c16acda35e6e350b39637" Jan 27 13:44:44 crc kubenswrapper[4786]: E0127 13:44:44.467274 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage 
pod=nova-kuttl-cell1-cell-delete-bmr6f_nova-kuttl-default(89580574-f031-4b88-98e5-18075ffa20c6)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" podUID="89580574-f031-4b88-98e5-18075ffa20c6" Jan 27 13:44:44 crc kubenswrapper[4786]: I0127 13:44:44.960110 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cp2wj" Jan 27 13:44:44 crc kubenswrapper[4786]: I0127 13:44:44.960391 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cp2wj" Jan 27 13:44:45 crc kubenswrapper[4786]: I0127 13:44:45.007308 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cp2wj" Jan 27 13:44:45 crc kubenswrapper[4786]: I0127 13:44:45.311584 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cp2wj" Jan 27 13:44:47 crc kubenswrapper[4786]: I0127 13:44:47.034358 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cp2wj"] Jan 27 13:44:47 crc kubenswrapper[4786]: I0127 13:44:47.393084 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-88xq9" Jan 27 13:44:47 crc kubenswrapper[4786]: I0127 13:44:47.393382 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-88xq9" Jan 27 13:44:47 crc kubenswrapper[4786]: I0127 13:44:47.436898 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-88xq9" Jan 27 13:44:48 crc kubenswrapper[4786]: I0127 13:44:48.037158 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cp2wj" podUID="fcccce0a-e1ed-4cc6-abc9-faaa827e133d" containerName="registry-server" 
containerID="cri-o://1211a77c47c53075f41ce72f14adffe5901f6314bd78dcb76bd61795554adfab" gracePeriod=2 Jan 27 13:44:48 crc kubenswrapper[4786]: I0127 13:44:48.083297 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-88xq9" Jan 27 13:44:48 crc kubenswrapper[4786]: I0127 13:44:48.426402 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-88xq9"] Jan 27 13:44:49 crc kubenswrapper[4786]: I0127 13:44:49.050155 4786 generic.go:334] "Generic (PLEG): container finished" podID="fcccce0a-e1ed-4cc6-abc9-faaa827e133d" containerID="1211a77c47c53075f41ce72f14adffe5901f6314bd78dcb76bd61795554adfab" exitCode=0 Jan 27 13:44:49 crc kubenswrapper[4786]: I0127 13:44:49.050232 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cp2wj" event={"ID":"fcccce0a-e1ed-4cc6-abc9-faaa827e133d","Type":"ContainerDied","Data":"1211a77c47c53075f41ce72f14adffe5901f6314bd78dcb76bd61795554adfab"} Jan 27 13:44:49 crc kubenswrapper[4786]: I0127 13:44:49.659059 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cp2wj" Jan 27 13:44:49 crc kubenswrapper[4786]: I0127 13:44:49.788991 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcccce0a-e1ed-4cc6-abc9-faaa827e133d-utilities\") pod \"fcccce0a-e1ed-4cc6-abc9-faaa827e133d\" (UID: \"fcccce0a-e1ed-4cc6-abc9-faaa827e133d\") " Jan 27 13:44:49 crc kubenswrapper[4786]: I0127 13:44:49.789370 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcccce0a-e1ed-4cc6-abc9-faaa827e133d-catalog-content\") pod \"fcccce0a-e1ed-4cc6-abc9-faaa827e133d\" (UID: \"fcccce0a-e1ed-4cc6-abc9-faaa827e133d\") " Jan 27 13:44:49 crc kubenswrapper[4786]: I0127 13:44:49.789469 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksggq\" (UniqueName: \"kubernetes.io/projected/fcccce0a-e1ed-4cc6-abc9-faaa827e133d-kube-api-access-ksggq\") pod \"fcccce0a-e1ed-4cc6-abc9-faaa827e133d\" (UID: \"fcccce0a-e1ed-4cc6-abc9-faaa827e133d\") " Jan 27 13:44:49 crc kubenswrapper[4786]: I0127 13:44:49.790121 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcccce0a-e1ed-4cc6-abc9-faaa827e133d-utilities" (OuterVolumeSpecName: "utilities") pod "fcccce0a-e1ed-4cc6-abc9-faaa827e133d" (UID: "fcccce0a-e1ed-4cc6-abc9-faaa827e133d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:44:49 crc kubenswrapper[4786]: I0127 13:44:49.798230 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcccce0a-e1ed-4cc6-abc9-faaa827e133d-kube-api-access-ksggq" (OuterVolumeSpecName: "kube-api-access-ksggq") pod "fcccce0a-e1ed-4cc6-abc9-faaa827e133d" (UID: "fcccce0a-e1ed-4cc6-abc9-faaa827e133d"). InnerVolumeSpecName "kube-api-access-ksggq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:44:49 crc kubenswrapper[4786]: I0127 13:44:49.843104 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcccce0a-e1ed-4cc6-abc9-faaa827e133d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fcccce0a-e1ed-4cc6-abc9-faaa827e133d" (UID: "fcccce0a-e1ed-4cc6-abc9-faaa827e133d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:44:49 crc kubenswrapper[4786]: I0127 13:44:49.892576 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcccce0a-e1ed-4cc6-abc9-faaa827e133d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:49 crc kubenswrapper[4786]: I0127 13:44:49.892672 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcccce0a-e1ed-4cc6-abc9-faaa827e133d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:49 crc kubenswrapper[4786]: I0127 13:44:49.892697 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksggq\" (UniqueName: \"kubernetes.io/projected/fcccce0a-e1ed-4cc6-abc9-faaa827e133d-kube-api-access-ksggq\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:50 crc kubenswrapper[4786]: I0127 13:44:50.077459 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cp2wj" Jan 27 13:44:50 crc kubenswrapper[4786]: I0127 13:44:50.077553 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-88xq9" podUID="6d2ec652-7293-46bc-b9ab-3225fbc43fb0" containerName="registry-server" containerID="cri-o://4a06f18c44037eacb838b8c112495b43dedd1cc5150977378f555e3c895393e5" gracePeriod=2 Jan 27 13:44:50 crc kubenswrapper[4786]: I0127 13:44:50.077436 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cp2wj" event={"ID":"fcccce0a-e1ed-4cc6-abc9-faaa827e133d","Type":"ContainerDied","Data":"479bde14825db9a83ecf61ad2b14447668c55282b28ce9d3b040294c21f5c93a"} Jan 27 13:44:50 crc kubenswrapper[4786]: I0127 13:44:50.077829 4786 scope.go:117] "RemoveContainer" containerID="1211a77c47c53075f41ce72f14adffe5901f6314bd78dcb76bd61795554adfab" Jan 27 13:44:50 crc kubenswrapper[4786]: I0127 13:44:50.106299 4786 scope.go:117] "RemoveContainer" containerID="2890c6ccef7a271aa9b6cb6368c1d194d5fb401dece9e8a9faf54ae2be74474d" Jan 27 13:44:50 crc kubenswrapper[4786]: I0127 13:44:50.112049 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cp2wj"] Jan 27 13:44:50 crc kubenswrapper[4786]: I0127 13:44:50.118139 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cp2wj"] Jan 27 13:44:50 crc kubenswrapper[4786]: I0127 13:44:50.131194 4786 scope.go:117] "RemoveContainer" containerID="29abd1e66cc5e8613cc06f6269682681d2425ac2898586a077132eac3b354d99" Jan 27 13:44:51 crc kubenswrapper[4786]: I0127 13:44:51.023503 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-88xq9" Jan 27 13:44:51 crc kubenswrapper[4786]: I0127 13:44:51.095084 4786 generic.go:334] "Generic (PLEG): container finished" podID="6d2ec652-7293-46bc-b9ab-3225fbc43fb0" containerID="4a06f18c44037eacb838b8c112495b43dedd1cc5150977378f555e3c895393e5" exitCode=0 Jan 27 13:44:51 crc kubenswrapper[4786]: I0127 13:44:51.095161 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-88xq9" event={"ID":"6d2ec652-7293-46bc-b9ab-3225fbc43fb0","Type":"ContainerDied","Data":"4a06f18c44037eacb838b8c112495b43dedd1cc5150977378f555e3c895393e5"} Jan 27 13:44:51 crc kubenswrapper[4786]: I0127 13:44:51.095238 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-88xq9" event={"ID":"6d2ec652-7293-46bc-b9ab-3225fbc43fb0","Type":"ContainerDied","Data":"6e8411071d5c73a7f5a4a4c09f063db1c21a747d14096c6c1b59d303ed1c2c85"} Jan 27 13:44:51 crc kubenswrapper[4786]: I0127 13:44:51.095266 4786 scope.go:117] "RemoveContainer" containerID="4a06f18c44037eacb838b8c112495b43dedd1cc5150977378f555e3c895393e5" Jan 27 13:44:51 crc kubenswrapper[4786]: I0127 13:44:51.095189 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-88xq9" Jan 27 13:44:51 crc kubenswrapper[4786]: I0127 13:44:51.110518 4786 scope.go:117] "RemoveContainer" containerID="86e964e88befe9c9db0d7aa62bc1a96c957813f0a434cc8ddc1bb71ec01ea7fd" Jan 27 13:44:51 crc kubenswrapper[4786]: I0127 13:44:51.113126 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d2ec652-7293-46bc-b9ab-3225fbc43fb0-utilities\") pod \"6d2ec652-7293-46bc-b9ab-3225fbc43fb0\" (UID: \"6d2ec652-7293-46bc-b9ab-3225fbc43fb0\") " Jan 27 13:44:51 crc kubenswrapper[4786]: I0127 13:44:51.113162 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d2ec652-7293-46bc-b9ab-3225fbc43fb0-catalog-content\") pod \"6d2ec652-7293-46bc-b9ab-3225fbc43fb0\" (UID: \"6d2ec652-7293-46bc-b9ab-3225fbc43fb0\") " Jan 27 13:44:51 crc kubenswrapper[4786]: I0127 13:44:51.113202 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l48sn\" (UniqueName: \"kubernetes.io/projected/6d2ec652-7293-46bc-b9ab-3225fbc43fb0-kube-api-access-l48sn\") pod \"6d2ec652-7293-46bc-b9ab-3225fbc43fb0\" (UID: \"6d2ec652-7293-46bc-b9ab-3225fbc43fb0\") " Jan 27 13:44:51 crc kubenswrapper[4786]: I0127 13:44:51.115002 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d2ec652-7293-46bc-b9ab-3225fbc43fb0-utilities" (OuterVolumeSpecName: "utilities") pod "6d2ec652-7293-46bc-b9ab-3225fbc43fb0" (UID: "6d2ec652-7293-46bc-b9ab-3225fbc43fb0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:44:51 crc kubenswrapper[4786]: I0127 13:44:51.116206 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d2ec652-7293-46bc-b9ab-3225fbc43fb0-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:51 crc kubenswrapper[4786]: I0127 13:44:51.140795 4786 scope.go:117] "RemoveContainer" containerID="7893fa04fd152f296ff2b67cfa21aed1be51e2c5d097602025889016c19aa6ce" Jan 27 13:44:51 crc kubenswrapper[4786]: I0127 13:44:51.144486 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d2ec652-7293-46bc-b9ab-3225fbc43fb0-kube-api-access-l48sn" (OuterVolumeSpecName: "kube-api-access-l48sn") pod "6d2ec652-7293-46bc-b9ab-3225fbc43fb0" (UID: "6d2ec652-7293-46bc-b9ab-3225fbc43fb0"). InnerVolumeSpecName "kube-api-access-l48sn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:44:51 crc kubenswrapper[4786]: I0127 13:44:51.197204 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d2ec652-7293-46bc-b9ab-3225fbc43fb0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d2ec652-7293-46bc-b9ab-3225fbc43fb0" (UID: "6d2ec652-7293-46bc-b9ab-3225fbc43fb0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:44:51 crc kubenswrapper[4786]: I0127 13:44:51.208639 4786 scope.go:117] "RemoveContainer" containerID="4a06f18c44037eacb838b8c112495b43dedd1cc5150977378f555e3c895393e5" Jan 27 13:44:51 crc kubenswrapper[4786]: E0127 13:44:51.209187 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a06f18c44037eacb838b8c112495b43dedd1cc5150977378f555e3c895393e5\": container with ID starting with 4a06f18c44037eacb838b8c112495b43dedd1cc5150977378f555e3c895393e5 not found: ID does not exist" containerID="4a06f18c44037eacb838b8c112495b43dedd1cc5150977378f555e3c895393e5" Jan 27 13:44:51 crc kubenswrapper[4786]: I0127 13:44:51.209304 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a06f18c44037eacb838b8c112495b43dedd1cc5150977378f555e3c895393e5"} err="failed to get container status \"4a06f18c44037eacb838b8c112495b43dedd1cc5150977378f555e3c895393e5\": rpc error: code = NotFound desc = could not find container \"4a06f18c44037eacb838b8c112495b43dedd1cc5150977378f555e3c895393e5\": container with ID starting with 4a06f18c44037eacb838b8c112495b43dedd1cc5150977378f555e3c895393e5 not found: ID does not exist" Jan 27 13:44:51 crc kubenswrapper[4786]: I0127 13:44:51.209340 4786 scope.go:117] "RemoveContainer" containerID="86e964e88befe9c9db0d7aa62bc1a96c957813f0a434cc8ddc1bb71ec01ea7fd" Jan 27 13:44:51 crc kubenswrapper[4786]: E0127 13:44:51.209731 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86e964e88befe9c9db0d7aa62bc1a96c957813f0a434cc8ddc1bb71ec01ea7fd\": container with ID starting with 86e964e88befe9c9db0d7aa62bc1a96c957813f0a434cc8ddc1bb71ec01ea7fd not found: ID does not exist" containerID="86e964e88befe9c9db0d7aa62bc1a96c957813f0a434cc8ddc1bb71ec01ea7fd" Jan 27 13:44:51 crc kubenswrapper[4786]: I0127 13:44:51.209802 
4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86e964e88befe9c9db0d7aa62bc1a96c957813f0a434cc8ddc1bb71ec01ea7fd"} err="failed to get container status \"86e964e88befe9c9db0d7aa62bc1a96c957813f0a434cc8ddc1bb71ec01ea7fd\": rpc error: code = NotFound desc = could not find container \"86e964e88befe9c9db0d7aa62bc1a96c957813f0a434cc8ddc1bb71ec01ea7fd\": container with ID starting with 86e964e88befe9c9db0d7aa62bc1a96c957813f0a434cc8ddc1bb71ec01ea7fd not found: ID does not exist" Jan 27 13:44:51 crc kubenswrapper[4786]: I0127 13:44:51.209843 4786 scope.go:117] "RemoveContainer" containerID="7893fa04fd152f296ff2b67cfa21aed1be51e2c5d097602025889016c19aa6ce" Jan 27 13:44:51 crc kubenswrapper[4786]: E0127 13:44:51.210186 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7893fa04fd152f296ff2b67cfa21aed1be51e2c5d097602025889016c19aa6ce\": container with ID starting with 7893fa04fd152f296ff2b67cfa21aed1be51e2c5d097602025889016c19aa6ce not found: ID does not exist" containerID="7893fa04fd152f296ff2b67cfa21aed1be51e2c5d097602025889016c19aa6ce" Jan 27 13:44:51 crc kubenswrapper[4786]: I0127 13:44:51.210220 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7893fa04fd152f296ff2b67cfa21aed1be51e2c5d097602025889016c19aa6ce"} err="failed to get container status \"7893fa04fd152f296ff2b67cfa21aed1be51e2c5d097602025889016c19aa6ce\": rpc error: code = NotFound desc = could not find container \"7893fa04fd152f296ff2b67cfa21aed1be51e2c5d097602025889016c19aa6ce\": container with ID starting with 7893fa04fd152f296ff2b67cfa21aed1be51e2c5d097602025889016c19aa6ce not found: ID does not exist" Jan 27 13:44:51 crc kubenswrapper[4786]: I0127 13:44:51.217883 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/6d2ec652-7293-46bc-b9ab-3225fbc43fb0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:51 crc kubenswrapper[4786]: I0127 13:44:51.217972 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l48sn\" (UniqueName: \"kubernetes.io/projected/6d2ec652-7293-46bc-b9ab-3225fbc43fb0-kube-api-access-l48sn\") on node \"crc\" DevicePath \"\"" Jan 27 13:44:51 crc kubenswrapper[4786]: I0127 13:44:51.441906 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-88xq9"] Jan 27 13:44:51 crc kubenswrapper[4786]: I0127 13:44:51.453096 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-88xq9"] Jan 27 13:44:51 crc kubenswrapper[4786]: I0127 13:44:51.480940 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d2ec652-7293-46bc-b9ab-3225fbc43fb0" path="/var/lib/kubelet/pods/6d2ec652-7293-46bc-b9ab-3225fbc43fb0/volumes" Jan 27 13:44:51 crc kubenswrapper[4786]: I0127 13:44:51.481934 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcccce0a-e1ed-4cc6-abc9-faaa827e133d" path="/var/lib/kubelet/pods/fcccce0a-e1ed-4cc6-abc9-faaa827e133d/volumes" Jan 27 13:44:56 crc kubenswrapper[4786]: I0127 13:44:56.465355 4786 scope.go:117] "RemoveContainer" containerID="212517e4d39a27c57e3cfa55d312f2b61a827e446f0c16acda35e6e350b39637" Jan 27 13:44:56 crc kubenswrapper[4786]: E0127 13:44:56.466452 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-bmr6f_nova-kuttl-default(89580574-f031-4b88-98e5-18075ffa20c6)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" podUID="89580574-f031-4b88-98e5-18075ffa20c6" Jan 27 13:45:00 crc kubenswrapper[4786]: I0127 13:45:00.153998 4786 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29492025-mtp2m"] Jan 27 13:45:00 crc kubenswrapper[4786]: E0127 13:45:00.154704 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcccce0a-e1ed-4cc6-abc9-faaa827e133d" containerName="registry-server" Jan 27 13:45:00 crc kubenswrapper[4786]: I0127 13:45:00.154721 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcccce0a-e1ed-4cc6-abc9-faaa827e133d" containerName="registry-server" Jan 27 13:45:00 crc kubenswrapper[4786]: E0127 13:45:00.154749 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcccce0a-e1ed-4cc6-abc9-faaa827e133d" containerName="extract-content" Jan 27 13:45:00 crc kubenswrapper[4786]: I0127 13:45:00.154757 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcccce0a-e1ed-4cc6-abc9-faaa827e133d" containerName="extract-content" Jan 27 13:45:00 crc kubenswrapper[4786]: E0127 13:45:00.154775 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcccce0a-e1ed-4cc6-abc9-faaa827e133d" containerName="extract-utilities" Jan 27 13:45:00 crc kubenswrapper[4786]: I0127 13:45:00.154784 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcccce0a-e1ed-4cc6-abc9-faaa827e133d" containerName="extract-utilities" Jan 27 13:45:00 crc kubenswrapper[4786]: E0127 13:45:00.154797 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d2ec652-7293-46bc-b9ab-3225fbc43fb0" containerName="extract-utilities" Jan 27 13:45:00 crc kubenswrapper[4786]: I0127 13:45:00.154804 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d2ec652-7293-46bc-b9ab-3225fbc43fb0" containerName="extract-utilities" Jan 27 13:45:00 crc kubenswrapper[4786]: E0127 13:45:00.154815 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d2ec652-7293-46bc-b9ab-3225fbc43fb0" containerName="extract-content" Jan 27 13:45:00 crc kubenswrapper[4786]: I0127 13:45:00.154822 4786 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6d2ec652-7293-46bc-b9ab-3225fbc43fb0" containerName="extract-content" Jan 27 13:45:00 crc kubenswrapper[4786]: E0127 13:45:00.154844 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d2ec652-7293-46bc-b9ab-3225fbc43fb0" containerName="registry-server" Jan 27 13:45:00 crc kubenswrapper[4786]: I0127 13:45:00.154851 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d2ec652-7293-46bc-b9ab-3225fbc43fb0" containerName="registry-server" Jan 27 13:45:00 crc kubenswrapper[4786]: I0127 13:45:00.155091 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcccce0a-e1ed-4cc6-abc9-faaa827e133d" containerName="registry-server" Jan 27 13:45:00 crc kubenswrapper[4786]: I0127 13:45:00.155116 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d2ec652-7293-46bc-b9ab-3225fbc43fb0" containerName="registry-server" Jan 27 13:45:00 crc kubenswrapper[4786]: I0127 13:45:00.155846 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492025-mtp2m" Jan 27 13:45:00 crc kubenswrapper[4786]: I0127 13:45:00.161910 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 13:45:00 crc kubenswrapper[4786]: I0127 13:45:00.161982 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 13:45:00 crc kubenswrapper[4786]: I0127 13:45:00.167668 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492025-mtp2m"] Jan 27 13:45:00 crc kubenswrapper[4786]: I0127 13:45:00.294421 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a7b174a-09f3-43e6-8f54-9ee7a62761ed-config-volume\") pod 
\"collect-profiles-29492025-mtp2m\" (UID: \"5a7b174a-09f3-43e6-8f54-9ee7a62761ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492025-mtp2m" Jan 27 13:45:00 crc kubenswrapper[4786]: I0127 13:45:00.294478 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a7b174a-09f3-43e6-8f54-9ee7a62761ed-secret-volume\") pod \"collect-profiles-29492025-mtp2m\" (UID: \"5a7b174a-09f3-43e6-8f54-9ee7a62761ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492025-mtp2m" Jan 27 13:45:00 crc kubenswrapper[4786]: I0127 13:45:00.294546 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nplfj\" (UniqueName: \"kubernetes.io/projected/5a7b174a-09f3-43e6-8f54-9ee7a62761ed-kube-api-access-nplfj\") pod \"collect-profiles-29492025-mtp2m\" (UID: \"5a7b174a-09f3-43e6-8f54-9ee7a62761ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492025-mtp2m" Jan 27 13:45:00 crc kubenswrapper[4786]: I0127 13:45:00.396479 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a7b174a-09f3-43e6-8f54-9ee7a62761ed-config-volume\") pod \"collect-profiles-29492025-mtp2m\" (UID: \"5a7b174a-09f3-43e6-8f54-9ee7a62761ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492025-mtp2m" Jan 27 13:45:00 crc kubenswrapper[4786]: I0127 13:45:00.396911 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a7b174a-09f3-43e6-8f54-9ee7a62761ed-secret-volume\") pod \"collect-profiles-29492025-mtp2m\" (UID: \"5a7b174a-09f3-43e6-8f54-9ee7a62761ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492025-mtp2m" Jan 27 13:45:00 crc kubenswrapper[4786]: I0127 13:45:00.397117 4786 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nplfj\" (UniqueName: \"kubernetes.io/projected/5a7b174a-09f3-43e6-8f54-9ee7a62761ed-kube-api-access-nplfj\") pod \"collect-profiles-29492025-mtp2m\" (UID: \"5a7b174a-09f3-43e6-8f54-9ee7a62761ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492025-mtp2m" Jan 27 13:45:00 crc kubenswrapper[4786]: I0127 13:45:00.397659 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a7b174a-09f3-43e6-8f54-9ee7a62761ed-config-volume\") pod \"collect-profiles-29492025-mtp2m\" (UID: \"5a7b174a-09f3-43e6-8f54-9ee7a62761ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492025-mtp2m" Jan 27 13:45:00 crc kubenswrapper[4786]: I0127 13:45:00.407158 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a7b174a-09f3-43e6-8f54-9ee7a62761ed-secret-volume\") pod \"collect-profiles-29492025-mtp2m\" (UID: \"5a7b174a-09f3-43e6-8f54-9ee7a62761ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492025-mtp2m" Jan 27 13:45:00 crc kubenswrapper[4786]: I0127 13:45:00.416378 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nplfj\" (UniqueName: \"kubernetes.io/projected/5a7b174a-09f3-43e6-8f54-9ee7a62761ed-kube-api-access-nplfj\") pod \"collect-profiles-29492025-mtp2m\" (UID: \"5a7b174a-09f3-43e6-8f54-9ee7a62761ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492025-mtp2m" Jan 27 13:45:00 crc kubenswrapper[4786]: I0127 13:45:00.473827 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492025-mtp2m" Jan 27 13:45:00 crc kubenswrapper[4786]: I0127 13:45:00.912455 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492025-mtp2m"] Jan 27 13:45:01 crc kubenswrapper[4786]: I0127 13:45:01.173271 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492025-mtp2m" event={"ID":"5a7b174a-09f3-43e6-8f54-9ee7a62761ed","Type":"ContainerStarted","Data":"3398cc527a001ac59395c7322ebdc33f33d6ee8aaf923dbd49a239d9510ea573"} Jan 27 13:45:01 crc kubenswrapper[4786]: I0127 13:45:01.173322 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492025-mtp2m" event={"ID":"5a7b174a-09f3-43e6-8f54-9ee7a62761ed","Type":"ContainerStarted","Data":"01d28aff0c1972c31bb3e71f156d83c817df3c694910df9b837c8cc0e32e9bbb"} Jan 27 13:45:01 crc kubenswrapper[4786]: I0127 13:45:01.317142 4786 scope.go:117] "RemoveContainer" containerID="ce3f70fb4ef78c62b242eabf8f5ef01da30df28a09fd5d14378b0e16d710552f" Jan 27 13:45:01 crc kubenswrapper[4786]: I0127 13:45:01.337884 4786 scope.go:117] "RemoveContainer" containerID="a64e8d5c052cea1d1c15a1794e4af433d23ec8a5de369dfb33911a9fbfff6316" Jan 27 13:45:01 crc kubenswrapper[4786]: I0127 13:45:01.354159 4786 scope.go:117] "RemoveContainer" containerID="a60420953614ee05eac611886326f87f4b11076588b138dada9e65c762ed4613" Jan 27 13:45:02 crc kubenswrapper[4786]: I0127 13:45:02.183778 4786 generic.go:334] "Generic (PLEG): container finished" podID="5a7b174a-09f3-43e6-8f54-9ee7a62761ed" containerID="3398cc527a001ac59395c7322ebdc33f33d6ee8aaf923dbd49a239d9510ea573" exitCode=0 Jan 27 13:45:02 crc kubenswrapper[4786]: I0127 13:45:02.183825 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492025-mtp2m" 
event={"ID":"5a7b174a-09f3-43e6-8f54-9ee7a62761ed","Type":"ContainerDied","Data":"3398cc527a001ac59395c7322ebdc33f33d6ee8aaf923dbd49a239d9510ea573"} Jan 27 13:45:03 crc kubenswrapper[4786]: I0127 13:45:03.480772 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492025-mtp2m" Jan 27 13:45:03 crc kubenswrapper[4786]: I0127 13:45:03.547754 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a7b174a-09f3-43e6-8f54-9ee7a62761ed-secret-volume\") pod \"5a7b174a-09f3-43e6-8f54-9ee7a62761ed\" (UID: \"5a7b174a-09f3-43e6-8f54-9ee7a62761ed\") " Jan 27 13:45:03 crc kubenswrapper[4786]: I0127 13:45:03.547936 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a7b174a-09f3-43e6-8f54-9ee7a62761ed-config-volume\") pod \"5a7b174a-09f3-43e6-8f54-9ee7a62761ed\" (UID: \"5a7b174a-09f3-43e6-8f54-9ee7a62761ed\") " Jan 27 13:45:03 crc kubenswrapper[4786]: I0127 13:45:03.547962 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nplfj\" (UniqueName: \"kubernetes.io/projected/5a7b174a-09f3-43e6-8f54-9ee7a62761ed-kube-api-access-nplfj\") pod \"5a7b174a-09f3-43e6-8f54-9ee7a62761ed\" (UID: \"5a7b174a-09f3-43e6-8f54-9ee7a62761ed\") " Jan 27 13:45:03 crc kubenswrapper[4786]: I0127 13:45:03.551570 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a7b174a-09f3-43e6-8f54-9ee7a62761ed-config-volume" (OuterVolumeSpecName: "config-volume") pod "5a7b174a-09f3-43e6-8f54-9ee7a62761ed" (UID: "5a7b174a-09f3-43e6-8f54-9ee7a62761ed"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 13:45:03 crc kubenswrapper[4786]: I0127 13:45:03.556785 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a7b174a-09f3-43e6-8f54-9ee7a62761ed-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5a7b174a-09f3-43e6-8f54-9ee7a62761ed" (UID: "5a7b174a-09f3-43e6-8f54-9ee7a62761ed"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:45:03 crc kubenswrapper[4786]: I0127 13:45:03.557002 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a7b174a-09f3-43e6-8f54-9ee7a62761ed-kube-api-access-nplfj" (OuterVolumeSpecName: "kube-api-access-nplfj") pod "5a7b174a-09f3-43e6-8f54-9ee7a62761ed" (UID: "5a7b174a-09f3-43e6-8f54-9ee7a62761ed"). InnerVolumeSpecName "kube-api-access-nplfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:45:03 crc kubenswrapper[4786]: I0127 13:45:03.649409 4786 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a7b174a-09f3-43e6-8f54-9ee7a62761ed-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 13:45:03 crc kubenswrapper[4786]: I0127 13:45:03.649767 4786 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a7b174a-09f3-43e6-8f54-9ee7a62761ed-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 13:45:03 crc kubenswrapper[4786]: I0127 13:45:03.649780 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nplfj\" (UniqueName: \"kubernetes.io/projected/5a7b174a-09f3-43e6-8f54-9ee7a62761ed-kube-api-access-nplfj\") on node \"crc\" DevicePath \"\"" Jan 27 13:45:04 crc kubenswrapper[4786]: I0127 13:45:04.199186 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492025-mtp2m" 
event={"ID":"5a7b174a-09f3-43e6-8f54-9ee7a62761ed","Type":"ContainerDied","Data":"01d28aff0c1972c31bb3e71f156d83c817df3c694910df9b837c8cc0e32e9bbb"} Jan 27 13:45:04 crc kubenswrapper[4786]: I0127 13:45:04.199261 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01d28aff0c1972c31bb3e71f156d83c817df3c694910df9b837c8cc0e32e9bbb" Jan 27 13:45:04 crc kubenswrapper[4786]: I0127 13:45:04.199432 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492025-mtp2m" Jan 27 13:45:04 crc kubenswrapper[4786]: I0127 13:45:04.560554 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491980-6pf7c"] Jan 27 13:45:04 crc kubenswrapper[4786]: I0127 13:45:04.568315 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491980-6pf7c"] Jan 27 13:45:05 crc kubenswrapper[4786]: I0127 13:45:05.476585 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cf531c6-d1a1-4f65-af72-093ffdb034c1" path="/var/lib/kubelet/pods/3cf531c6-d1a1-4f65-af72-093ffdb034c1/volumes" Jan 27 13:45:05 crc kubenswrapper[4786]: I0127 13:45:05.946254 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-8567ddf8f4-cxtk8_3c6e6e93-d5d9-4b2c-a285-8fd57f9994eb/keystone-api/0.log" Jan 27 13:45:08 crc kubenswrapper[4786]: I0127 13:45:08.465199 4786 scope.go:117] "RemoveContainer" containerID="212517e4d39a27c57e3cfa55d312f2b61a827e446f0c16acda35e6e350b39637" Jan 27 13:45:08 crc kubenswrapper[4786]: E0127 13:45:08.465728 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-bmr6f_nova-kuttl-default(89580574-f031-4b88-98e5-18075ffa20c6)\"" 
pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" podUID="89580574-f031-4b88-98e5-18075ffa20c6" Jan 27 13:45:08 crc kubenswrapper[4786]: I0127 13:45:08.928042 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_memcached-0_a011c0d3-4039-465f-9ea6-acad60c397dd/memcached/0.log" Jan 27 13:45:09 crc kubenswrapper[4786]: I0127 13:45:09.491187 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-api-30f1-account-create-update-lccvr_f79b70f2-36e5-4532-bf69-a70a865afe9d/mariadb-account-create-update/0.log" Jan 27 13:45:09 crc kubenswrapper[4786]: I0127 13:45:09.533254 4786 patch_prober.go:28] interesting pod/machine-config-daemon-7bxtk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:45:09 crc kubenswrapper[4786]: I0127 13:45:09.533320 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:45:09 crc kubenswrapper[4786]: I0127 13:45:09.533370 4786 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" Jan 27 13:45:09 crc kubenswrapper[4786]: I0127 13:45:09.534154 4786 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9aee1b60d3250c89b1c0c42e1a4673943c8599c36c572d4bce7037dcd4019a1d"} pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 13:45:09 crc 
kubenswrapper[4786]: I0127 13:45:09.534215 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" containerID="cri-o://9aee1b60d3250c89b1c0c42e1a4673943c8599c36c572d4bce7037dcd4019a1d" gracePeriod=600 Jan 27 13:45:09 crc kubenswrapper[4786]: E0127 13:45:09.662117 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:45:10 crc kubenswrapper[4786]: I0127 13:45:10.041751 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-api-db-create-62wq6_49f94a84-88d4-4ac3-aa57-6c82d914b6e7/mariadb-database-create/0.log" Jan 27 13:45:10 crc kubenswrapper[4786]: I0127 13:45:10.252489 4786 generic.go:334] "Generic (PLEG): container finished" podID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerID="9aee1b60d3250c89b1c0c42e1a4673943c8599c36c572d4bce7037dcd4019a1d" exitCode=0 Jan 27 13:45:10 crc kubenswrapper[4786]: I0127 13:45:10.252531 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" event={"ID":"2c6a2646-52f7-41be-8a81-3fed6eac75cc","Type":"ContainerDied","Data":"9aee1b60d3250c89b1c0c42e1a4673943c8599c36c572d4bce7037dcd4019a1d"} Jan 27 13:45:10 crc kubenswrapper[4786]: I0127 13:45:10.252570 4786 scope.go:117] "RemoveContainer" containerID="5a4c20f9aecd7c4c87050e3915ee6dd31d35d262bd6e9faa3426ca360fe926a7" Jan 27 13:45:10 crc kubenswrapper[4786]: I0127 13:45:10.253280 4786 scope.go:117] "RemoveContainer" 
containerID="9aee1b60d3250c89b1c0c42e1a4673943c8599c36c572d4bce7037dcd4019a1d" Jan 27 13:45:10 crc kubenswrapper[4786]: E0127 13:45:10.253687 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:45:10 crc kubenswrapper[4786]: I0127 13:45:10.603597 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-cell0-5daf-account-create-update-rl42c_ad4b32dd-6140-457a-88f3-b885d0a62c1e/mariadb-account-create-update/0.log" Jan 27 13:45:11 crc kubenswrapper[4786]: I0127 13:45:11.152494 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-cell0-db-create-crwnd_c5d4fb63-2bb4-4949-b567-ae619a7925af/mariadb-database-create/0.log" Jan 27 13:45:11 crc kubenswrapper[4786]: I0127 13:45:11.600345 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-cell1-1417-account-create-update-hqr9z_f2303d07-2542-43fe-9b22-243c5daa607b/mariadb-account-create-update/0.log" Jan 27 13:45:12 crc kubenswrapper[4786]: I0127 13:45:12.025107 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-cell1-db-create-f6c4f_4c686636-04e0-4ea9-beda-5dfd5c05b477/mariadb-database-create/0.log" Jan 27 13:45:12 crc kubenswrapper[4786]: I0127 13:45:12.529416 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-api-0_c7e0ce90-fda8-4074-b753-0df1531d7fcc/nova-kuttl-api-log/0.log" Jan 27 13:45:12 crc kubenswrapper[4786]: I0127 13:45:12.947518 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell0-cell-mapping-j2ns7_e4df6489-80cb-45c8-90b2-7fd2e9bca103/nova-manage/0.log" Jan 27 13:45:13 crc kubenswrapper[4786]: I0127 13:45:13.421156 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell0-conductor-0_10c3eb0f-3265-4520-afd7-0e002bcc5b81/nova-kuttl-cell0-conductor-conductor/0.log" Jan 27 13:45:13 crc kubenswrapper[4786]: I0127 13:45:13.836710 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell0-conductor-db-sync-lm94s_b52037f1-e48f-46e3-a5bc-a6ffa2fc7541/nova-kuttl-cell0-conductor-db-sync/0.log" Jan 27 13:45:14 crc kubenswrapper[4786]: I0127 13:45:14.262167 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-cell-delete-bmr6f_89580574-f031-4b88-98e5-18075ffa20c6/nova-manage/5.log" Jan 27 13:45:14 crc kubenswrapper[4786]: I0127 13:45:14.707909 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-cell-mapping-4snp9_198b59b6-ce67-44fa-bf96-4c080e830106/nova-manage/0.log" Jan 27 13:45:15 crc kubenswrapper[4786]: I0127 13:45:15.170155 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-conductor-0_616fd9dd-c4dc-45a7-ab66-358fc07acea0/nova-kuttl-cell1-conductor-conductor/0.log" Jan 27 13:45:15 crc kubenswrapper[4786]: I0127 13:45:15.590183 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-conductor-db-sync-ssjwj_fac61271-09f6-4a27-bb20-edf0cb037d72/nova-kuttl-cell1-conductor-db-sync/0.log" Jan 27 13:45:15 crc kubenswrapper[4786]: I0127 13:45:15.995031 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-novncproxy-0_0a40bfc8-a365-4329-8362-ecd8b784f52d/nova-kuttl-cell1-novncproxy-novncproxy/0.log" Jan 27 13:45:16 crc kubenswrapper[4786]: I0127 13:45:16.466380 4786 log.go:25] "Finished 
parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-metadata-0_1d7007af-1e26-4b89-a761-5921086ff009/nova-kuttl-metadata-log/0.log" Jan 27 13:45:16 crc kubenswrapper[4786]: I0127 13:45:16.913581 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-scheduler-0_4d2d365a-46c9-4f47-9501-654446cbd40d/nova-kuttl-scheduler-scheduler/0.log" Jan 27 13:45:17 crc kubenswrapper[4786]: I0127 13:45:17.325491 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_7a4e6dad-e854-4ecd-9441-04e72893ea29/galera/0.log" Jan 27 13:45:18 crc kubenswrapper[4786]: I0127 13:45:18.547397 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_d5e1220e-a41a-4e46-890f-3502e548bf66/galera/0.log" Jan 27 13:45:19 crc kubenswrapper[4786]: I0127 13:45:19.043530 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstackclient_ebe93b02-f04c-48d1-8f5f-68e113379180/openstackclient/0.log" Jan 27 13:45:19 crc kubenswrapper[4786]: I0127 13:45:19.464759 4786 scope.go:117] "RemoveContainer" containerID="212517e4d39a27c57e3cfa55d312f2b61a827e446f0c16acda35e6e350b39637" Jan 27 13:45:19 crc kubenswrapper[4786]: E0127 13:45:19.465003 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-bmr6f_nova-kuttl-default(89580574-f031-4b88-98e5-18075ffa20c6)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" podUID="89580574-f031-4b88-98e5-18075ffa20c6" Jan 27 13:45:19 crc kubenswrapper[4786]: I0127 13:45:19.480829 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-57fbd5dfd8-mlllb_9e2844b5-b7ec-43fd-873a-6cdaa879c676/placement-log/0.log" Jan 27 13:45:19 crc kubenswrapper[4786]: I0127 13:45:19.893948 4786 log.go:25] 
"Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_f76eacb2-75ca-46c4-badb-b1404b018bf6/rabbitmq/0.log" Jan 27 13:45:20 crc kubenswrapper[4786]: I0127 13:45:20.332872 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_9f2857cb-9399-4563-b68e-3b51cbd47f80/rabbitmq/0.log" Jan 27 13:45:20 crc kubenswrapper[4786]: I0127 13:45:20.768551 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_c8472c3b-b877-4e6c-992f-f4146f81e3fc/rabbitmq/0.log" Jan 27 13:45:22 crc kubenswrapper[4786]: I0127 13:45:22.466491 4786 scope.go:117] "RemoveContainer" containerID="9aee1b60d3250c89b1c0c42e1a4673943c8599c36c572d4bce7037dcd4019a1d" Jan 27 13:45:22 crc kubenswrapper[4786]: E0127 13:45:22.476311 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:45:32 crc kubenswrapper[4786]: I0127 13:45:32.467388 4786 scope.go:117] "RemoveContainer" containerID="212517e4d39a27c57e3cfa55d312f2b61a827e446f0c16acda35e6e350b39637" Jan 27 13:45:32 crc kubenswrapper[4786]: E0127 13:45:32.468531 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-bmr6f_nova-kuttl-default(89580574-f031-4b88-98e5-18075ffa20c6)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" podUID="89580574-f031-4b88-98e5-18075ffa20c6" Jan 27 13:45:35 crc kubenswrapper[4786]: I0127 13:45:35.465252 4786 scope.go:117] "RemoveContainer" 
containerID="9aee1b60d3250c89b1c0c42e1a4673943c8599c36c572d4bce7037dcd4019a1d" Jan 27 13:45:35 crc kubenswrapper[4786]: E0127 13:45:35.466211 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:45:47 crc kubenswrapper[4786]: I0127 13:45:47.470536 4786 scope.go:117] "RemoveContainer" containerID="212517e4d39a27c57e3cfa55d312f2b61a827e446f0c16acda35e6e350b39637" Jan 27 13:45:47 crc kubenswrapper[4786]: E0127 13:45:47.471399 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-bmr6f_nova-kuttl-default(89580574-f031-4b88-98e5-18075ffa20c6)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" podUID="89580574-f031-4b88-98e5-18075ffa20c6" Jan 27 13:45:49 crc kubenswrapper[4786]: I0127 13:45:49.465308 4786 scope.go:117] "RemoveContainer" containerID="9aee1b60d3250c89b1c0c42e1a4673943c8599c36c572d4bce7037dcd4019a1d" Jan 27 13:45:49 crc kubenswrapper[4786]: E0127 13:45:49.465537 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:45:52 crc kubenswrapper[4786]: I0127 13:45:52.081012 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_42ef84609808c573a527c95ede8ab921d64aad4376098e560e058002a0gqzqb_7091c8b9-67ad-488b-b8f1-3c24875d9436/extract/0.log" Jan 27 13:45:52 crc kubenswrapper[4786]: I0127 13:45:52.529193 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_85eb71024025e4cc66b960472c0fb0311115e3bc192ae016fd46dabe45pgb2p_3f10db36-4147-4749-8356-334a343efd90/extract/0.log" Jan 27 13:45:52 crc kubenswrapper[4786]: I0127 13:45:52.933480 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-5mpdp_a5ebc5e9-fc27-4326-927a-c791f35a71e9/manager/0.log" Jan 27 13:45:53 crc kubenswrapper[4786]: I0127 13:45:53.363942 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-cpn89_37986633-7f55-41aa-b83d-7f74a5640f2f/manager/0.log" Jan 27 13:45:53 crc kubenswrapper[4786]: I0127 13:45:53.790211 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-w6r6x_84881c35-4d84-4c91-b401-0bf1d7de9314/manager/0.log" Jan 27 13:45:54 crc kubenswrapper[4786]: I0127 13:45:54.200417 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-z5xnb_4a240cd4-c49f-4716-80bb-6d1ba632a32c/manager/0.log" Jan 27 13:45:54 crc kubenswrapper[4786]: I0127 13:45:54.634070 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-fwlxc_e607a576-31c5-4ef7-82ea-66851b5a33d2/manager/0.log" Jan 27 13:45:55 crc kubenswrapper[4786]: I0127 13:45:55.047524 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-wnmcp_7e2710fc-7453-40a9-81c0-ccec15d86a77/manager/0.log" Jan 27 13:45:55 crc kubenswrapper[4786]: I0127 13:45:55.585268 4786 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-xdrln_43d55b3f-3bd8-4083-9e0d-f398938a47e6/manager/0.log" Jan 27 13:45:56 crc kubenswrapper[4786]: I0127 13:45:56.012414 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-lspl5_cfc6eb47-18a2-442a-a1d8-ddec61462156/manager/0.log" Jan 27 13:45:56 crc kubenswrapper[4786]: I0127 13:45:56.524995 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-6p8s9_5614e239-8fc3-4091-aad4-55a217ca1092/manager/0.log" Jan 27 13:45:56 crc kubenswrapper[4786]: I0127 13:45:56.929598 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-c2c8q_4058f919-d5c7-4f73-9c8a-432409f9022a/manager/0.log" Jan 27 13:45:57 crc kubenswrapper[4786]: I0127 13:45:57.356820 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-5zb2z_88c4fa6a-bb1a-46fe-a863-473b9ec66ce7/manager/0.log" Jan 27 13:45:57 crc kubenswrapper[4786]: I0127 13:45:57.791582 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-m75zn_1eed4ae8-357f-4388-a9d3-9382b0fc84ec/manager/0.log" Jan 27 13:45:58 crc kubenswrapper[4786]: I0127 13:45:58.558263 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-754b45c6dd-fws6l_e0d981b7-9481-4f08-a283-a274d47087f9/manager/0.log" Jan 27 13:45:58 crc kubenswrapper[4786]: I0127 13:45:58.970474 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-index-pjw9n_198411ea-9abf-4fe2-b7bb-95be72d0aa84/registry-server/0.log" Jan 27 13:45:59 crc kubenswrapper[4786]: I0127 13:45:59.407415 4786 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-pb5lk_5c18c2c6-04e6-4b87-b92d-586823b20ac1/manager/0.log" Jan 27 13:45:59 crc kubenswrapper[4786]: I0127 13:45:59.834626 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b8546q8kt_45599804-69cd-44f0-bb76-a15e5a3ff700/manager/0.log" Jan 27 13:46:00 crc kubenswrapper[4786]: I0127 13:46:00.464453 4786 scope.go:117] "RemoveContainer" containerID="212517e4d39a27c57e3cfa55d312f2b61a827e446f0c16acda35e6e350b39637" Jan 27 13:46:00 crc kubenswrapper[4786]: E0127 13:46:00.464775 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-bmr6f_nova-kuttl-default(89580574-f031-4b88-98e5-18075ffa20c6)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" podUID="89580574-f031-4b88-98e5-18075ffa20c6" Jan 27 13:46:00 crc kubenswrapper[4786]: I0127 13:46:00.597583 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-596b94879d-2gz6v_d7f033ce-43f9-425f-a74c-65735b66f5b8/manager/0.log" Jan 27 13:46:01 crc kubenswrapper[4786]: I0127 13:46:01.012993 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-7mc6w_e525b58f-7304-40e1-9fdc-949f43bb2cba/registry-server/0.log" Jan 27 13:46:01 crc kubenswrapper[4786]: I0127 13:46:01.402063 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-cv96v_42803b12-da36-48df-b9bb-ed3d4555b7b4/manager/0.log" Jan 27 13:46:01 crc kubenswrapper[4786]: I0127 13:46:01.447812 4786 scope.go:117] "RemoveContainer" 
containerID="32509229f88f585984da1ef764f58e212b324ac35555669edaf5a7111aef9858" Jan 27 13:46:01 crc kubenswrapper[4786]: I0127 13:46:01.844436 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-ghr9t_250d0fff-09d2-49be-94a3-6eefdd3aab06/manager/0.log" Jan 27 13:46:02 crc kubenswrapper[4786]: I0127 13:46:02.292393 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-km9zd_fd3a1177-720b-4e0d-83d9-9ea046369690/operator/0.log" Jan 27 13:46:02 crc kubenswrapper[4786]: I0127 13:46:02.725413 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-qmp6d_9d2d7f2c-4522-45bf-a12d-1eb7cc11041e/manager/0.log" Jan 27 13:46:03 crc kubenswrapper[4786]: I0127 13:46:03.169802 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-fkbbc_39341414-eb82-400d-96ce-e546dd32d15b/manager/0.log" Jan 27 13:46:03 crc kubenswrapper[4786]: I0127 13:46:03.644029 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-8whbf_7732c732-60c5-476d-bf01-ed83c38b4d35/manager/0.log" Jan 27 13:46:04 crc kubenswrapper[4786]: I0127 13:46:04.066743 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-q7glc_f7bba046-60b2-4fa4-96ec-976f73b1ff7c/manager/0.log" Jan 27 13:46:04 crc kubenswrapper[4786]: I0127 13:46:04.465349 4786 scope.go:117] "RemoveContainer" containerID="9aee1b60d3250c89b1c0c42e1a4673943c8599c36c572d4bce7037dcd4019a1d" Jan 27 13:46:04 crc kubenswrapper[4786]: E0127 13:46:04.465858 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:46:09 crc kubenswrapper[4786]: I0127 13:46:09.263310 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-8567ddf8f4-cxtk8_3c6e6e93-d5d9-4b2c-a285-8fd57f9994eb/keystone-api/0.log" Jan 27 13:46:11 crc kubenswrapper[4786]: I0127 13:46:11.465859 4786 scope.go:117] "RemoveContainer" containerID="212517e4d39a27c57e3cfa55d312f2b61a827e446f0c16acda35e6e350b39637" Jan 27 13:46:12 crc kubenswrapper[4786]: I0127 13:46:12.551718 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_memcached-0_a011c0d3-4039-465f-9ea6-acad60c397dd/memcached/0.log" Jan 27 13:46:12 crc kubenswrapper[4786]: I0127 13:46:12.754117 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" event={"ID":"89580574-f031-4b88-98e5-18075ffa20c6","Type":"ContainerStarted","Data":"d69cfaff77b7e95018000d73c5e0f80cfb2b0af75f7adeb8aecaf57611c269b6"} Jan 27 13:46:13 crc kubenswrapper[4786]: I0127 13:46:13.126765 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-api-30f1-account-create-update-lccvr_f79b70f2-36e5-4532-bf69-a70a865afe9d/mariadb-account-create-update/0.log" Jan 27 13:46:13 crc kubenswrapper[4786]: I0127 13:46:13.788060 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f"] Jan 27 13:46:13 crc kubenswrapper[4786]: I0127 13:46:13.788465 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" podUID="89580574-f031-4b88-98e5-18075ffa20c6" containerName="nova-manage" 
containerID="cri-o://d69cfaff77b7e95018000d73c5e0f80cfb2b0af75f7adeb8aecaf57611c269b6" gracePeriod=30 Jan 27 13:46:14 crc kubenswrapper[4786]: I0127 13:46:14.002637 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-api-db-create-62wq6_49f94a84-88d4-4ac3-aa57-6c82d914b6e7/mariadb-database-create/0.log" Jan 27 13:46:14 crc kubenswrapper[4786]: I0127 13:46:14.590661 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-cell0-5daf-account-create-update-rl42c_ad4b32dd-6140-457a-88f3-b885d0a62c1e/mariadb-account-create-update/0.log" Jan 27 13:46:15 crc kubenswrapper[4786]: I0127 13:46:15.069172 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-cell0-db-create-crwnd_c5d4fb63-2bb4-4949-b567-ae619a7925af/mariadb-database-create/0.log" Jan 27 13:46:15 crc kubenswrapper[4786]: I0127 13:46:15.482898 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-cell1-1417-account-create-update-hqr9z_f2303d07-2542-43fe-9b22-243c5daa607b/mariadb-account-create-update/0.log" Jan 27 13:46:15 crc kubenswrapper[4786]: I0127 13:46:15.943048 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-cell1-db-create-f6c4f_4c686636-04e0-4ea9-beda-5dfd5c05b477/mariadb-database-create/0.log" Jan 27 13:46:16 crc kubenswrapper[4786]: I0127 13:46:16.444627 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-api-0_c7e0ce90-fda8-4074-b753-0df1531d7fcc/nova-kuttl-api-log/0.log" Jan 27 13:46:16 crc kubenswrapper[4786]: I0127 13:46:16.840180 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell0-cell-mapping-j2ns7_e4df6489-80cb-45c8-90b2-7fd2e9bca103/nova-manage/0.log" Jan 27 13:46:17 crc kubenswrapper[4786]: I0127 13:46:17.311559 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" Jan 27 13:46:17 crc kubenswrapper[4786]: I0127 13:46:17.345648 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell0-conductor-0_10c3eb0f-3265-4520-afd7-0e002bcc5b81/nova-kuttl-cell0-conductor-conductor/0.log" Jan 27 13:46:17 crc kubenswrapper[4786]: I0127 13:46:17.409073 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89580574-f031-4b88-98e5-18075ffa20c6-scripts\") pod \"89580574-f031-4b88-98e5-18075ffa20c6\" (UID: \"89580574-f031-4b88-98e5-18075ffa20c6\") " Jan 27 13:46:17 crc kubenswrapper[4786]: I0127 13:46:17.409255 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89580574-f031-4b88-98e5-18075ffa20c6-config-data\") pod \"89580574-f031-4b88-98e5-18075ffa20c6\" (UID: \"89580574-f031-4b88-98e5-18075ffa20c6\") " Jan 27 13:46:17 crc kubenswrapper[4786]: I0127 13:46:17.409304 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnknw\" (UniqueName: \"kubernetes.io/projected/89580574-f031-4b88-98e5-18075ffa20c6-kube-api-access-gnknw\") pod \"89580574-f031-4b88-98e5-18075ffa20c6\" (UID: \"89580574-f031-4b88-98e5-18075ffa20c6\") " Jan 27 13:46:17 crc kubenswrapper[4786]: I0127 13:46:17.415220 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89580574-f031-4b88-98e5-18075ffa20c6-scripts" (OuterVolumeSpecName: "scripts") pod "89580574-f031-4b88-98e5-18075ffa20c6" (UID: "89580574-f031-4b88-98e5-18075ffa20c6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:46:17 crc kubenswrapper[4786]: I0127 13:46:17.421905 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89580574-f031-4b88-98e5-18075ffa20c6-kube-api-access-gnknw" (OuterVolumeSpecName: "kube-api-access-gnknw") pod "89580574-f031-4b88-98e5-18075ffa20c6" (UID: "89580574-f031-4b88-98e5-18075ffa20c6"). InnerVolumeSpecName "kube-api-access-gnknw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:46:17 crc kubenswrapper[4786]: I0127 13:46:17.435430 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89580574-f031-4b88-98e5-18075ffa20c6-config-data" (OuterVolumeSpecName: "config-data") pod "89580574-f031-4b88-98e5-18075ffa20c6" (UID: "89580574-f031-4b88-98e5-18075ffa20c6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 13:46:17 crc kubenswrapper[4786]: I0127 13:46:17.511718 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89580574-f031-4b88-98e5-18075ffa20c6-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 13:46:17 crc kubenswrapper[4786]: I0127 13:46:17.511777 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89580574-f031-4b88-98e5-18075ffa20c6-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 13:46:17 crc kubenswrapper[4786]: I0127 13:46:17.511791 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnknw\" (UniqueName: \"kubernetes.io/projected/89580574-f031-4b88-98e5-18075ffa20c6-kube-api-access-gnknw\") on node \"crc\" DevicePath \"\"" Jan 27 13:46:17 crc kubenswrapper[4786]: I0127 13:46:17.803848 4786 generic.go:334] "Generic (PLEG): container finished" podID="89580574-f031-4b88-98e5-18075ffa20c6" containerID="d69cfaff77b7e95018000d73c5e0f80cfb2b0af75f7adeb8aecaf57611c269b6" exitCode=2 Jan 27 
13:46:17 crc kubenswrapper[4786]: I0127 13:46:17.804177 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" event={"ID":"89580574-f031-4b88-98e5-18075ffa20c6","Type":"ContainerDied","Data":"d69cfaff77b7e95018000d73c5e0f80cfb2b0af75f7adeb8aecaf57611c269b6"} Jan 27 13:46:17 crc kubenswrapper[4786]: I0127 13:46:17.804268 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" event={"ID":"89580574-f031-4b88-98e5-18075ffa20c6","Type":"ContainerDied","Data":"aaf8cfcdc930147123fad8fa7a1c95cd628f321f880a66113e73b0bbd6f66f1c"} Jan 27 13:46:17 crc kubenswrapper[4786]: I0127 13:46:17.804383 4786 scope.go:117] "RemoveContainer" containerID="d69cfaff77b7e95018000d73c5e0f80cfb2b0af75f7adeb8aecaf57611c269b6" Jan 27 13:46:17 crc kubenswrapper[4786]: I0127 13:46:17.804567 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f" Jan 27 13:46:17 crc kubenswrapper[4786]: I0127 13:46:17.829052 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f"] Jan 27 13:46:17 crc kubenswrapper[4786]: I0127 13:46:17.833496 4786 scope.go:117] "RemoveContainer" containerID="212517e4d39a27c57e3cfa55d312f2b61a827e446f0c16acda35e6e350b39637" Jan 27 13:46:17 crc kubenswrapper[4786]: I0127 13:46:17.836970 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell0-conductor-db-sync-lm94s_b52037f1-e48f-46e3-a5bc-a6ffa2fc7541/nova-kuttl-cell0-conductor-db-sync/0.log" Jan 27 13:46:17 crc kubenswrapper[4786]: I0127 13:46:17.837839 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-delete-bmr6f"] Jan 27 13:46:17 crc kubenswrapper[4786]: I0127 13:46:17.880078 4786 scope.go:117] "RemoveContainer" 
containerID="d69cfaff77b7e95018000d73c5e0f80cfb2b0af75f7adeb8aecaf57611c269b6" Jan 27 13:46:17 crc kubenswrapper[4786]: E0127 13:46:17.880536 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d69cfaff77b7e95018000d73c5e0f80cfb2b0af75f7adeb8aecaf57611c269b6\": container with ID starting with d69cfaff77b7e95018000d73c5e0f80cfb2b0af75f7adeb8aecaf57611c269b6 not found: ID does not exist" containerID="d69cfaff77b7e95018000d73c5e0f80cfb2b0af75f7adeb8aecaf57611c269b6" Jan 27 13:46:17 crc kubenswrapper[4786]: I0127 13:46:17.880565 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d69cfaff77b7e95018000d73c5e0f80cfb2b0af75f7adeb8aecaf57611c269b6"} err="failed to get container status \"d69cfaff77b7e95018000d73c5e0f80cfb2b0af75f7adeb8aecaf57611c269b6\": rpc error: code = NotFound desc = could not find container \"d69cfaff77b7e95018000d73c5e0f80cfb2b0af75f7adeb8aecaf57611c269b6\": container with ID starting with d69cfaff77b7e95018000d73c5e0f80cfb2b0af75f7adeb8aecaf57611c269b6 not found: ID does not exist" Jan 27 13:46:17 crc kubenswrapper[4786]: I0127 13:46:17.880586 4786 scope.go:117] "RemoveContainer" containerID="212517e4d39a27c57e3cfa55d312f2b61a827e446f0c16acda35e6e350b39637" Jan 27 13:46:17 crc kubenswrapper[4786]: E0127 13:46:17.880960 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"212517e4d39a27c57e3cfa55d312f2b61a827e446f0c16acda35e6e350b39637\": container with ID starting with 212517e4d39a27c57e3cfa55d312f2b61a827e446f0c16acda35e6e350b39637 not found: ID does not exist" containerID="212517e4d39a27c57e3cfa55d312f2b61a827e446f0c16acda35e6e350b39637" Jan 27 13:46:17 crc kubenswrapper[4786]: I0127 13:46:17.881129 4786 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"212517e4d39a27c57e3cfa55d312f2b61a827e446f0c16acda35e6e350b39637"} err="failed to get container status \"212517e4d39a27c57e3cfa55d312f2b61a827e446f0c16acda35e6e350b39637\": rpc error: code = NotFound desc = could not find container \"212517e4d39a27c57e3cfa55d312f2b61a827e446f0c16acda35e6e350b39637\": container with ID starting with 212517e4d39a27c57e3cfa55d312f2b61a827e446f0c16acda35e6e350b39637 not found: ID does not exist" Jan 27 13:46:18 crc kubenswrapper[4786]: I0127 13:46:18.657991 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-cell-mapping-4snp9_198b59b6-ce67-44fa-bf96-4c080e830106/nova-manage/0.log" Jan 27 13:46:19 crc kubenswrapper[4786]: I0127 13:46:19.130029 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-conductor-0_616fd9dd-c4dc-45a7-ab66-358fc07acea0/nova-kuttl-cell1-conductor-conductor/0.log" Jan 27 13:46:19 crc kubenswrapper[4786]: I0127 13:46:19.465379 4786 scope.go:117] "RemoveContainer" containerID="9aee1b60d3250c89b1c0c42e1a4673943c8599c36c572d4bce7037dcd4019a1d" Jan 27 13:46:19 crc kubenswrapper[4786]: E0127 13:46:19.465673 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:46:19 crc kubenswrapper[4786]: I0127 13:46:19.474991 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89580574-f031-4b88-98e5-18075ffa20c6" path="/var/lib/kubelet/pods/89580574-f031-4b88-98e5-18075ffa20c6/volumes" Jan 27 13:46:19 crc kubenswrapper[4786]: I0127 13:46:19.528575 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-conductor-db-sync-ssjwj_fac61271-09f6-4a27-bb20-edf0cb037d72/nova-kuttl-cell1-conductor-db-sync/0.log" Jan 27 13:46:19 crc kubenswrapper[4786]: I0127 13:46:19.952096 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-novncproxy-0_0a40bfc8-a365-4329-8362-ecd8b784f52d/nova-kuttl-cell1-novncproxy-novncproxy/0.log" Jan 27 13:46:20 crc kubenswrapper[4786]: I0127 13:46:20.462139 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-metadata-0_1d7007af-1e26-4b89-a761-5921086ff009/nova-kuttl-metadata-log/0.log" Jan 27 13:46:20 crc kubenswrapper[4786]: I0127 13:46:20.929064 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-scheduler-0_4d2d365a-46c9-4f47-9501-654446cbd40d/nova-kuttl-scheduler-scheduler/0.log" Jan 27 13:46:21 crc kubenswrapper[4786]: I0127 13:46:21.433151 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_7a4e6dad-e854-4ecd-9441-04e72893ea29/galera/0.log" Jan 27 13:46:21 crc kubenswrapper[4786]: I0127 13:46:21.913163 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_d5e1220e-a41a-4e46-890f-3502e548bf66/galera/0.log" Jan 27 13:46:22 crc kubenswrapper[4786]: I0127 13:46:22.369329 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstackclient_ebe93b02-f04c-48d1-8f5f-68e113379180/openstackclient/0.log" Jan 27 13:46:22 crc kubenswrapper[4786]: I0127 13:46:22.817516 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-57fbd5dfd8-mlllb_9e2844b5-b7ec-43fd-873a-6cdaa879c676/placement-log/0.log" Jan 27 13:46:23 crc kubenswrapper[4786]: I0127 13:46:23.242095 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_f76eacb2-75ca-46c4-badb-b1404b018bf6/rabbitmq/0.log" Jan 27 13:46:23 crc kubenswrapper[4786]: I0127 13:46:23.687414 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_9f2857cb-9399-4563-b68e-3b51cbd47f80/rabbitmq/0.log" Jan 27 13:46:24 crc kubenswrapper[4786]: I0127 13:46:24.150997 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_c8472c3b-b877-4e6c-992f-f4146f81e3fc/rabbitmq/0.log" Jan 27 13:46:34 crc kubenswrapper[4786]: I0127 13:46:34.464789 4786 scope.go:117] "RemoveContainer" containerID="9aee1b60d3250c89b1c0c42e1a4673943c8599c36c572d4bce7037dcd4019a1d" Jan 27 13:46:34 crc kubenswrapper[4786]: E0127 13:46:34.465666 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:46:46 crc kubenswrapper[4786]: I0127 13:46:46.465111 4786 scope.go:117] "RemoveContainer" containerID="9aee1b60d3250c89b1c0c42e1a4673943c8599c36c572d4bce7037dcd4019a1d" Jan 27 13:46:46 crc kubenswrapper[4786]: E0127 13:46:46.465930 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:46:57 crc kubenswrapper[4786]: I0127 13:46:57.207833 4786 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_42ef84609808c573a527c95ede8ab921d64aad4376098e560e058002a0gqzqb_7091c8b9-67ad-488b-b8f1-3c24875d9436/extract/0.log" Jan 27 13:46:57 crc kubenswrapper[4786]: I0127 13:46:57.667514 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_85eb71024025e4cc66b960472c0fb0311115e3bc192ae016fd46dabe45pgb2p_3f10db36-4147-4749-8356-334a343efd90/extract/0.log" Jan 27 13:46:58 crc kubenswrapper[4786]: I0127 13:46:58.125432 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-5mpdp_a5ebc5e9-fc27-4326-927a-c791f35a71e9/manager/0.log" Jan 27 13:46:58 crc kubenswrapper[4786]: I0127 13:46:58.466058 4786 scope.go:117] "RemoveContainer" containerID="9aee1b60d3250c89b1c0c42e1a4673943c8599c36c572d4bce7037dcd4019a1d" Jan 27 13:46:58 crc kubenswrapper[4786]: E0127 13:46:58.466491 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:46:58 crc kubenswrapper[4786]: I0127 13:46:58.521947 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-cpn89_37986633-7f55-41aa-b83d-7f74a5640f2f/manager/0.log" Jan 27 13:46:58 crc kubenswrapper[4786]: I0127 13:46:58.933658 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-w6r6x_84881c35-4d84-4c91-b401-0bf1d7de9314/manager/0.log" Jan 27 13:46:59 crc kubenswrapper[4786]: I0127 13:46:59.376507 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-z5xnb_4a240cd4-c49f-4716-80bb-6d1ba632a32c/manager/0.log" Jan 27 13:46:59 crc kubenswrapper[4786]: I0127 13:46:59.792776 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-fwlxc_e607a576-31c5-4ef7-82ea-66851b5a33d2/manager/0.log" Jan 27 13:47:00 crc kubenswrapper[4786]: I0127 13:47:00.189876 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-wnmcp_7e2710fc-7453-40a9-81c0-ccec15d86a77/manager/0.log" Jan 27 13:47:00 crc kubenswrapper[4786]: I0127 13:47:00.717239 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-xdrln_43d55b3f-3bd8-4083-9e0d-f398938a47e6/manager/0.log" Jan 27 13:47:01 crc kubenswrapper[4786]: I0127 13:47:01.113455 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-lspl5_cfc6eb47-18a2-442a-a1d8-ddec61462156/manager/0.log" Jan 27 13:47:01 crc kubenswrapper[4786]: I0127 13:47:01.582849 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-6p8s9_5614e239-8fc3-4091-aad4-55a217ca1092/manager/0.log" Jan 27 13:47:01 crc kubenswrapper[4786]: I0127 13:47:01.985797 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-c2c8q_4058f919-d5c7-4f73-9c8a-432409f9022a/manager/0.log" Jan 27 13:47:02 crc kubenswrapper[4786]: I0127 13:47:02.448026 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-5zb2z_88c4fa6a-bb1a-46fe-a863-473b9ec66ce7/manager/0.log" Jan 27 13:47:02 crc kubenswrapper[4786]: I0127 13:47:02.856045 4786 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-m75zn_1eed4ae8-357f-4388-a9d3-9382b0fc84ec/manager/0.log" Jan 27 13:47:03 crc kubenswrapper[4786]: I0127 13:47:03.699879 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-754b45c6dd-fws6l_e0d981b7-9481-4f08-a283-a274d47087f9/manager/0.log" Jan 27 13:47:04 crc kubenswrapper[4786]: I0127 13:47:04.136066 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-index-pjw9n_198411ea-9abf-4fe2-b7bb-95be72d0aa84/registry-server/0.log" Jan 27 13:47:04 crc kubenswrapper[4786]: I0127 13:47:04.614035 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-pb5lk_5c18c2c6-04e6-4b87-b92d-586823b20ac1/manager/0.log" Jan 27 13:47:05 crc kubenswrapper[4786]: I0127 13:47:05.033703 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b8546q8kt_45599804-69cd-44f0-bb76-a15e5a3ff700/manager/0.log" Jan 27 13:47:05 crc kubenswrapper[4786]: I0127 13:47:05.800468 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-596b94879d-2gz6v_d7f033ce-43f9-425f-a74c-65735b66f5b8/manager/0.log" Jan 27 13:47:06 crc kubenswrapper[4786]: I0127 13:47:06.261469 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-7mc6w_e525b58f-7304-40e1-9fdc-949f43bb2cba/registry-server/0.log" Jan 27 13:47:06 crc kubenswrapper[4786]: I0127 13:47:06.670584 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-cv96v_42803b12-da36-48df-b9bb-ed3d4555b7b4/manager/0.log" Jan 27 13:47:07 crc kubenswrapper[4786]: I0127 13:47:07.138026 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-ghr9t_250d0fff-09d2-49be-94a3-6eefdd3aab06/manager/0.log" Jan 27 13:47:07 crc kubenswrapper[4786]: I0127 13:47:07.589849 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-km9zd_fd3a1177-720b-4e0d-83d9-9ea046369690/operator/0.log" Jan 27 13:47:08 crc kubenswrapper[4786]: I0127 13:47:08.045710 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-qmp6d_9d2d7f2c-4522-45bf-a12d-1eb7cc11041e/manager/0.log" Jan 27 13:47:08 crc kubenswrapper[4786]: I0127 13:47:08.498043 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-fkbbc_39341414-eb82-400d-96ce-e546dd32d15b/manager/0.log" Jan 27 13:47:08 crc kubenswrapper[4786]: I0127 13:47:08.927571 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-8whbf_7732c732-60c5-476d-bf01-ed83c38b4d35/manager/0.log" Jan 27 13:47:09 crc kubenswrapper[4786]: I0127 13:47:09.337638 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-q7glc_f7bba046-60b2-4fa4-96ec-976f73b1ff7c/manager/0.log" Jan 27 13:47:12 crc kubenswrapper[4786]: I0127 13:47:12.465879 4786 scope.go:117] "RemoveContainer" containerID="9aee1b60d3250c89b1c0c42e1a4673943c8599c36c572d4bce7037dcd4019a1d" Jan 27 13:47:12 crc kubenswrapper[4786]: E0127 13:47:12.466852 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:47:26 crc kubenswrapper[4786]: I0127 13:47:26.464549 4786 scope.go:117] "RemoveContainer" containerID="9aee1b60d3250c89b1c0c42e1a4673943c8599c36c572d4bce7037dcd4019a1d" Jan 27 13:47:26 crc kubenswrapper[4786]: E0127 13:47:26.465368 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:47:39 crc kubenswrapper[4786]: I0127 13:47:39.465158 4786 scope.go:117] "RemoveContainer" containerID="9aee1b60d3250c89b1c0c42e1a4673943c8599c36c572d4bce7037dcd4019a1d" Jan 27 13:47:39 crc kubenswrapper[4786]: E0127 13:47:39.465850 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:47:40 crc kubenswrapper[4786]: I0127 13:47:40.503578 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-r7tdj/must-gather-ztp6m"] Jan 27 13:47:40 crc kubenswrapper[4786]: E0127 13:47:40.512242 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89580574-f031-4b88-98e5-18075ffa20c6" containerName="nova-manage" Jan 27 13:47:40 crc kubenswrapper[4786]: I0127 13:47:40.512293 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="89580574-f031-4b88-98e5-18075ffa20c6" 
containerName="nova-manage" Jan 27 13:47:40 crc kubenswrapper[4786]: E0127 13:47:40.512304 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89580574-f031-4b88-98e5-18075ffa20c6" containerName="nova-manage" Jan 27 13:47:40 crc kubenswrapper[4786]: I0127 13:47:40.512311 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="89580574-f031-4b88-98e5-18075ffa20c6" containerName="nova-manage" Jan 27 13:47:40 crc kubenswrapper[4786]: E0127 13:47:40.512320 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89580574-f031-4b88-98e5-18075ffa20c6" containerName="nova-manage" Jan 27 13:47:40 crc kubenswrapper[4786]: I0127 13:47:40.512329 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="89580574-f031-4b88-98e5-18075ffa20c6" containerName="nova-manage" Jan 27 13:47:40 crc kubenswrapper[4786]: E0127 13:47:40.512351 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89580574-f031-4b88-98e5-18075ffa20c6" containerName="nova-manage" Jan 27 13:47:40 crc kubenswrapper[4786]: I0127 13:47:40.512357 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="89580574-f031-4b88-98e5-18075ffa20c6" containerName="nova-manage" Jan 27 13:47:40 crc kubenswrapper[4786]: E0127 13:47:40.512367 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a7b174a-09f3-43e6-8f54-9ee7a62761ed" containerName="collect-profiles" Jan 27 13:47:40 crc kubenswrapper[4786]: I0127 13:47:40.512373 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a7b174a-09f3-43e6-8f54-9ee7a62761ed" containerName="collect-profiles" Jan 27 13:47:40 crc kubenswrapper[4786]: E0127 13:47:40.512386 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89580574-f031-4b88-98e5-18075ffa20c6" containerName="nova-manage" Jan 27 13:47:40 crc kubenswrapper[4786]: I0127 13:47:40.512392 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="89580574-f031-4b88-98e5-18075ffa20c6" containerName="nova-manage" Jan 27 13:47:40 
crc kubenswrapper[4786]: E0127 13:47:40.512404 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89580574-f031-4b88-98e5-18075ffa20c6" containerName="nova-manage" Jan 27 13:47:40 crc kubenswrapper[4786]: I0127 13:47:40.512410 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="89580574-f031-4b88-98e5-18075ffa20c6" containerName="nova-manage" Jan 27 13:47:40 crc kubenswrapper[4786]: I0127 13:47:40.512650 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="89580574-f031-4b88-98e5-18075ffa20c6" containerName="nova-manage" Jan 27 13:47:40 crc kubenswrapper[4786]: I0127 13:47:40.512660 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="89580574-f031-4b88-98e5-18075ffa20c6" containerName="nova-manage" Jan 27 13:47:40 crc kubenswrapper[4786]: I0127 13:47:40.512667 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="89580574-f031-4b88-98e5-18075ffa20c6" containerName="nova-manage" Jan 27 13:47:40 crc kubenswrapper[4786]: I0127 13:47:40.512677 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="89580574-f031-4b88-98e5-18075ffa20c6" containerName="nova-manage" Jan 27 13:47:40 crc kubenswrapper[4786]: I0127 13:47:40.512683 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a7b174a-09f3-43e6-8f54-9ee7a62761ed" containerName="collect-profiles" Jan 27 13:47:40 crc kubenswrapper[4786]: I0127 13:47:40.512696 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="89580574-f031-4b88-98e5-18075ffa20c6" containerName="nova-manage" Jan 27 13:47:40 crc kubenswrapper[4786]: E0127 13:47:40.512864 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89580574-f031-4b88-98e5-18075ffa20c6" containerName="nova-manage" Jan 27 13:47:40 crc kubenswrapper[4786]: I0127 13:47:40.512871 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="89580574-f031-4b88-98e5-18075ffa20c6" containerName="nova-manage" Jan 27 13:47:40 crc 
kubenswrapper[4786]: I0127 13:47:40.513021 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="89580574-f031-4b88-98e5-18075ffa20c6" containerName="nova-manage" Jan 27 13:47:40 crc kubenswrapper[4786]: I0127 13:47:40.513042 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="89580574-f031-4b88-98e5-18075ffa20c6" containerName="nova-manage" Jan 27 13:47:40 crc kubenswrapper[4786]: I0127 13:47:40.513819 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-r7tdj/must-gather-ztp6m" Jan 27 13:47:40 crc kubenswrapper[4786]: I0127 13:47:40.517291 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-r7tdj"/"kube-root-ca.crt" Jan 27 13:47:40 crc kubenswrapper[4786]: I0127 13:47:40.517583 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-r7tdj/must-gather-ztp6m"] Jan 27 13:47:40 crc kubenswrapper[4786]: I0127 13:47:40.518043 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-r7tdj"/"openshift-service-ca.crt" Jan 27 13:47:40 crc kubenswrapper[4786]: I0127 13:47:40.661090 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e6aee19f-6886-40ca-a074-299f4806ca27-must-gather-output\") pod \"must-gather-ztp6m\" (UID: \"e6aee19f-6886-40ca-a074-299f4806ca27\") " pod="openshift-must-gather-r7tdj/must-gather-ztp6m" Jan 27 13:47:40 crc kubenswrapper[4786]: I0127 13:47:40.661157 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-664kx\" (UniqueName: \"kubernetes.io/projected/e6aee19f-6886-40ca-a074-299f4806ca27-kube-api-access-664kx\") pod \"must-gather-ztp6m\" (UID: \"e6aee19f-6886-40ca-a074-299f4806ca27\") " pod="openshift-must-gather-r7tdj/must-gather-ztp6m" Jan 27 13:47:40 crc kubenswrapper[4786]: I0127 
13:47:40.763244 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e6aee19f-6886-40ca-a074-299f4806ca27-must-gather-output\") pod \"must-gather-ztp6m\" (UID: \"e6aee19f-6886-40ca-a074-299f4806ca27\") " pod="openshift-must-gather-r7tdj/must-gather-ztp6m" Jan 27 13:47:40 crc kubenswrapper[4786]: I0127 13:47:40.763311 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-664kx\" (UniqueName: \"kubernetes.io/projected/e6aee19f-6886-40ca-a074-299f4806ca27-kube-api-access-664kx\") pod \"must-gather-ztp6m\" (UID: \"e6aee19f-6886-40ca-a074-299f4806ca27\") " pod="openshift-must-gather-r7tdj/must-gather-ztp6m" Jan 27 13:47:40 crc kubenswrapper[4786]: I0127 13:47:40.763767 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e6aee19f-6886-40ca-a074-299f4806ca27-must-gather-output\") pod \"must-gather-ztp6m\" (UID: \"e6aee19f-6886-40ca-a074-299f4806ca27\") " pod="openshift-must-gather-r7tdj/must-gather-ztp6m" Jan 27 13:47:40 crc kubenswrapper[4786]: I0127 13:47:40.780895 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-664kx\" (UniqueName: \"kubernetes.io/projected/e6aee19f-6886-40ca-a074-299f4806ca27-kube-api-access-664kx\") pod \"must-gather-ztp6m\" (UID: \"e6aee19f-6886-40ca-a074-299f4806ca27\") " pod="openshift-must-gather-r7tdj/must-gather-ztp6m" Jan 27 13:47:40 crc kubenswrapper[4786]: I0127 13:47:40.830903 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-r7tdj/must-gather-ztp6m" Jan 27 13:47:41 crc kubenswrapper[4786]: I0127 13:47:41.307425 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-r7tdj/must-gather-ztp6m"] Jan 27 13:47:41 crc kubenswrapper[4786]: I0127 13:47:41.485047 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-r7tdj/must-gather-ztp6m" event={"ID":"e6aee19f-6886-40ca-a074-299f4806ca27","Type":"ContainerStarted","Data":"f1d4c0e1f79a265ea73015b0668076f963b2acf64c1f17246cd5b4c61dcf7aba"} Jan 27 13:47:51 crc kubenswrapper[4786]: I0127 13:47:51.465241 4786 scope.go:117] "RemoveContainer" containerID="9aee1b60d3250c89b1c0c42e1a4673943c8599c36c572d4bce7037dcd4019a1d" Jan 27 13:47:51 crc kubenswrapper[4786]: E0127 13:47:51.467065 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:47:58 crc kubenswrapper[4786]: I0127 13:47:58.612531 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-r7tdj/must-gather-ztp6m" event={"ID":"e6aee19f-6886-40ca-a074-299f4806ca27","Type":"ContainerStarted","Data":"a7e485d3183b47d77b68662b159d77bd082e3948b3366e2209154e64e93673cd"} Jan 27 13:47:58 crc kubenswrapper[4786]: I0127 13:47:58.613076 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-r7tdj/must-gather-ztp6m" event={"ID":"e6aee19f-6886-40ca-a074-299f4806ca27","Type":"ContainerStarted","Data":"d54951db120c28c630914acfc63efe21e1153d2fad606b95b39d6d74c39a5e1c"} Jan 27 13:47:58 crc kubenswrapper[4786]: I0127 13:47:58.628861 4786 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-must-gather-r7tdj/must-gather-ztp6m" podStartSLOduration=2.489409036 podStartE2EDuration="18.628833697s" podCreationTimestamp="2026-01-27 13:47:40 +0000 UTC" firstStartedPulling="2026-01-27 13:47:41.312835784 +0000 UTC m=+2444.523449913" lastFinishedPulling="2026-01-27 13:47:57.452260455 +0000 UTC m=+2460.662874574" observedRunningTime="2026-01-27 13:47:58.625854026 +0000 UTC m=+2461.836468135" watchObservedRunningTime="2026-01-27 13:47:58.628833697 +0000 UTC m=+2461.839447816" Jan 27 13:48:03 crc kubenswrapper[4786]: I0127 13:48:03.465424 4786 scope.go:117] "RemoveContainer" containerID="9aee1b60d3250c89b1c0c42e1a4673943c8599c36c572d4bce7037dcd4019a1d" Jan 27 13:48:03 crc kubenswrapper[4786]: E0127 13:48:03.466279 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:48:15 crc kubenswrapper[4786]: I0127 13:48:15.464959 4786 scope.go:117] "RemoveContainer" containerID="9aee1b60d3250c89b1c0c42e1a4673943c8599c36c572d4bce7037dcd4019a1d" Jan 27 13:48:15 crc kubenswrapper[4786]: E0127 13:48:15.465754 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:48:26 crc kubenswrapper[4786]: I0127 13:48:26.465198 4786 scope.go:117] "RemoveContainer" 
containerID="9aee1b60d3250c89b1c0c42e1a4673943c8599c36c572d4bce7037dcd4019a1d" Jan 27 13:48:26 crc kubenswrapper[4786]: E0127 13:48:26.465983 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:48:40 crc kubenswrapper[4786]: I0127 13:48:40.465218 4786 scope.go:117] "RemoveContainer" containerID="9aee1b60d3250c89b1c0c42e1a4673943c8599c36c572d4bce7037dcd4019a1d" Jan 27 13:48:40 crc kubenswrapper[4786]: E0127 13:48:40.466040 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:48:45 crc kubenswrapper[4786]: I0127 13:48:45.038835 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-db-create-62wq6"] Jan 27 13:48:45 crc kubenswrapper[4786]: I0127 13:48:45.048410 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-crwnd"] Jan 27 13:48:45 crc kubenswrapper[4786]: I0127 13:48:45.058133 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-db-create-62wq6"] Jan 27 13:48:45 crc kubenswrapper[4786]: I0127 13:48:45.066919 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-crwnd"] Jan 27 13:48:45 crc kubenswrapper[4786]: I0127 13:48:45.474676 4786 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49f94a84-88d4-4ac3-aa57-6c82d914b6e7" path="/var/lib/kubelet/pods/49f94a84-88d4-4ac3-aa57-6c82d914b6e7/volumes" Jan 27 13:48:45 crc kubenswrapper[4786]: I0127 13:48:45.475192 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5d4fb63-2bb4-4949-b567-ae619a7925af" path="/var/lib/kubelet/pods/c5d4fb63-2bb4-4949-b567-ae619a7925af/volumes" Jan 27 13:48:46 crc kubenswrapper[4786]: I0127 13:48:46.030733 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-5daf-account-create-update-rl42c"] Jan 27 13:48:46 crc kubenswrapper[4786]: I0127 13:48:46.039320 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-1417-account-create-update-hqr9z"] Jan 27 13:48:46 crc kubenswrapper[4786]: I0127 13:48:46.047779 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-5daf-account-create-update-rl42c"] Jan 27 13:48:46 crc kubenswrapper[4786]: I0127 13:48:46.059234 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-30f1-account-create-update-lccvr"] Jan 27 13:48:46 crc kubenswrapper[4786]: I0127 13:48:46.068326 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-f6c4f"] Jan 27 13:48:46 crc kubenswrapper[4786]: I0127 13:48:46.076299 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-f6c4f"] Jan 27 13:48:46 crc kubenswrapper[4786]: I0127 13:48:46.082989 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-30f1-account-create-update-lccvr"] Jan 27 13:48:46 crc kubenswrapper[4786]: I0127 13:48:46.089572 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-1417-account-create-update-hqr9z"] Jan 27 13:48:47 crc kubenswrapper[4786]: I0127 13:48:47.474846 4786 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="4c686636-04e0-4ea9-beda-5dfd5c05b477" path="/var/lib/kubelet/pods/4c686636-04e0-4ea9-beda-5dfd5c05b477/volumes" Jan 27 13:48:47 crc kubenswrapper[4786]: I0127 13:48:47.475717 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad4b32dd-6140-457a-88f3-b885d0a62c1e" path="/var/lib/kubelet/pods/ad4b32dd-6140-457a-88f3-b885d0a62c1e/volumes" Jan 27 13:48:47 crc kubenswrapper[4786]: I0127 13:48:47.476274 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2303d07-2542-43fe-9b22-243c5daa607b" path="/var/lib/kubelet/pods/f2303d07-2542-43fe-9b22-243c5daa607b/volumes" Jan 27 13:48:47 crc kubenswrapper[4786]: I0127 13:48:47.476791 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f79b70f2-36e5-4532-bf69-a70a865afe9d" path="/var/lib/kubelet/pods/f79b70f2-36e5-4532-bf69-a70a865afe9d/volumes" Jan 27 13:48:51 crc kubenswrapper[4786]: I0127 13:48:51.464538 4786 scope.go:117] "RemoveContainer" containerID="9aee1b60d3250c89b1c0c42e1a4673943c8599c36c572d4bce7037dcd4019a1d" Jan 27 13:48:51 crc kubenswrapper[4786]: E0127 13:48:51.465107 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:48:54 crc kubenswrapper[4786]: I0127 13:48:54.048092 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lm94s"] Jan 27 13:48:54 crc kubenswrapper[4786]: I0127 13:48:54.060071 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lm94s"] Jan 27 13:48:55 crc kubenswrapper[4786]: I0127 
13:48:55.474138 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b52037f1-e48f-46e3-a5bc-a6ffa2fc7541" path="/var/lib/kubelet/pods/b52037f1-e48f-46e3-a5bc-a6ffa2fc7541/volumes" Jan 27 13:49:00 crc kubenswrapper[4786]: I0127 13:49:00.446348 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_42ef84609808c573a527c95ede8ab921d64aad4376098e560e058002a0gqzqb_7091c8b9-67ad-488b-b8f1-3c24875d9436/util/0.log" Jan 27 13:49:00 crc kubenswrapper[4786]: I0127 13:49:00.616067 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_42ef84609808c573a527c95ede8ab921d64aad4376098e560e058002a0gqzqb_7091c8b9-67ad-488b-b8f1-3c24875d9436/util/0.log" Jan 27 13:49:00 crc kubenswrapper[4786]: I0127 13:49:00.670299 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_42ef84609808c573a527c95ede8ab921d64aad4376098e560e058002a0gqzqb_7091c8b9-67ad-488b-b8f1-3c24875d9436/pull/0.log" Jan 27 13:49:00 crc kubenswrapper[4786]: I0127 13:49:00.678877 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_42ef84609808c573a527c95ede8ab921d64aad4376098e560e058002a0gqzqb_7091c8b9-67ad-488b-b8f1-3c24875d9436/pull/0.log" Jan 27 13:49:00 crc kubenswrapper[4786]: I0127 13:49:00.848675 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_42ef84609808c573a527c95ede8ab921d64aad4376098e560e058002a0gqzqb_7091c8b9-67ad-488b-b8f1-3c24875d9436/util/0.log" Jan 27 13:49:00 crc kubenswrapper[4786]: I0127 13:49:00.885553 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_42ef84609808c573a527c95ede8ab921d64aad4376098e560e058002a0gqzqb_7091c8b9-67ad-488b-b8f1-3c24875d9436/extract/0.log" Jan 27 13:49:00 crc kubenswrapper[4786]: I0127 13:49:00.885905 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_42ef84609808c573a527c95ede8ab921d64aad4376098e560e058002a0gqzqb_7091c8b9-67ad-488b-b8f1-3c24875d9436/pull/0.log" Jan 27 13:49:01 crc kubenswrapper[4786]: I0127 13:49:01.027436 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_85eb71024025e4cc66b960472c0fb0311115e3bc192ae016fd46dabe45pgb2p_3f10db36-4147-4749-8356-334a343efd90/util/0.log" Jan 27 13:49:01 crc kubenswrapper[4786]: I0127 13:49:01.210752 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_85eb71024025e4cc66b960472c0fb0311115e3bc192ae016fd46dabe45pgb2p_3f10db36-4147-4749-8356-334a343efd90/pull/0.log" Jan 27 13:49:01 crc kubenswrapper[4786]: I0127 13:49:01.214343 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_85eb71024025e4cc66b960472c0fb0311115e3bc192ae016fd46dabe45pgb2p_3f10db36-4147-4749-8356-334a343efd90/util/0.log" Jan 27 13:49:01 crc kubenswrapper[4786]: I0127 13:49:01.222617 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_85eb71024025e4cc66b960472c0fb0311115e3bc192ae016fd46dabe45pgb2p_3f10db36-4147-4749-8356-334a343efd90/pull/0.log" Jan 27 13:49:01 crc kubenswrapper[4786]: I0127 13:49:01.360076 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_85eb71024025e4cc66b960472c0fb0311115e3bc192ae016fd46dabe45pgb2p_3f10db36-4147-4749-8356-334a343efd90/util/0.log" Jan 27 13:49:01 crc kubenswrapper[4786]: I0127 13:49:01.397015 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_85eb71024025e4cc66b960472c0fb0311115e3bc192ae016fd46dabe45pgb2p_3f10db36-4147-4749-8356-334a343efd90/pull/0.log" Jan 27 13:49:01 crc kubenswrapper[4786]: I0127 13:49:01.409338 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_85eb71024025e4cc66b960472c0fb0311115e3bc192ae016fd46dabe45pgb2p_3f10db36-4147-4749-8356-334a343efd90/extract/0.log" Jan 27 13:49:01 crc 
kubenswrapper[4786]: I0127 13:49:01.540866 4786 scope.go:117] "RemoveContainer" containerID="5ff06bfc40717dc3741a9a264d3fd7980cfdf0c5ac710a0c00909104d0f9cd87" Jan 27 13:49:01 crc kubenswrapper[4786]: I0127 13:49:01.541213 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-5mpdp_a5ebc5e9-fc27-4326-927a-c791f35a71e9/manager/0.log" Jan 27 13:49:01 crc kubenswrapper[4786]: I0127 13:49:01.561036 4786 scope.go:117] "RemoveContainer" containerID="48030f2aedc0ff3b2765828ac7f449d0ddbfe897d6c3e980b2600baa24d51f54" Jan 27 13:49:01 crc kubenswrapper[4786]: I0127 13:49:01.600578 4786 scope.go:117] "RemoveContainer" containerID="4abaa6688014c3919443064c4f77ae8bda6701f8a84ab8cf7e192e3959245d65" Jan 27 13:49:01 crc kubenswrapper[4786]: I0127 13:49:01.676495 4786 scope.go:117] "RemoveContainer" containerID="8c5267e49b9b768500c27856362f92a8565baa189b728b6e4b567294c1488f13" Jan 27 13:49:01 crc kubenswrapper[4786]: I0127 13:49:01.692727 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-cpn89_37986633-7f55-41aa-b83d-7f74a5640f2f/manager/0.log" Jan 27 13:49:01 crc kubenswrapper[4786]: I0127 13:49:01.705922 4786 scope.go:117] "RemoveContainer" containerID="95118eaea21b23cbce991522180f5bd3db8aaf8d3acb9f85d684091a40d8bc0c" Jan 27 13:49:01 crc kubenswrapper[4786]: I0127 13:49:01.733263 4786 scope.go:117] "RemoveContainer" containerID="f17dad222062c6569f0aa175518443806eeaf02aa92feb3087596a3540fb614d" Jan 27 13:49:01 crc kubenswrapper[4786]: I0127 13:49:01.775649 4786 scope.go:117] "RemoveContainer" containerID="ccda72274c49a1399c44f1d2aa6d70b6e4300bee6a2ad1515ee71f8a06269452" Jan 27 13:49:01 crc kubenswrapper[4786]: I0127 13:49:01.834971 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-w6r6x_84881c35-4d84-4c91-b401-0bf1d7de9314/manager/0.log" Jan 27 
13:49:01 crc kubenswrapper[4786]: I0127 13:49:01.906269 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-z5xnb_4a240cd4-c49f-4716-80bb-6d1ba632a32c/manager/0.log" Jan 27 13:49:02 crc kubenswrapper[4786]: I0127 13:49:02.015016 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-fwlxc_e607a576-31c5-4ef7-82ea-66851b5a33d2/manager/0.log" Jan 27 13:49:02 crc kubenswrapper[4786]: I0127 13:49:02.127183 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-wnmcp_7e2710fc-7453-40a9-81c0-ccec15d86a77/manager/0.log" Jan 27 13:49:02 crc kubenswrapper[4786]: I0127 13:49:02.328885 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-xdrln_43d55b3f-3bd8-4083-9e0d-f398938a47e6/manager/0.log" Jan 27 13:49:02 crc kubenswrapper[4786]: I0127 13:49:02.330678 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-lspl5_cfc6eb47-18a2-442a-a1d8-ddec61462156/manager/0.log" Jan 27 13:49:02 crc kubenswrapper[4786]: I0127 13:49:02.793813 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-6p8s9_5614e239-8fc3-4091-aad4-55a217ca1092/manager/0.log" Jan 27 13:49:02 crc kubenswrapper[4786]: I0127 13:49:02.818136 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-c2c8q_4058f919-d5c7-4f73-9c8a-432409f9022a/manager/0.log" Jan 27 13:49:02 crc kubenswrapper[4786]: I0127 13:49:02.944261 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-5zb2z_88c4fa6a-bb1a-46fe-a863-473b9ec66ce7/manager/0.log" Jan 27 13:49:03 crc kubenswrapper[4786]: I0127 13:49:03.031370 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-m75zn_1eed4ae8-357f-4388-a9d3-9382b0fc84ec/manager/0.log" Jan 27 13:49:03 crc kubenswrapper[4786]: I0127 13:49:03.268935 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-index-pjw9n_198411ea-9abf-4fe2-b7bb-95be72d0aa84/registry-server/0.log" Jan 27 13:49:03 crc kubenswrapper[4786]: I0127 13:49:03.454261 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-754b45c6dd-fws6l_e0d981b7-9481-4f08-a283-a274d47087f9/manager/0.log" Jan 27 13:49:03 crc kubenswrapper[4786]: I0127 13:49:03.467034 4786 scope.go:117] "RemoveContainer" containerID="9aee1b60d3250c89b1c0c42e1a4673943c8599c36c572d4bce7037dcd4019a1d" Jan 27 13:49:03 crc kubenswrapper[4786]: E0127 13:49:03.467301 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:49:03 crc kubenswrapper[4786]: I0127 13:49:03.551816 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-pb5lk_5c18c2c6-04e6-4b87-b92d-586823b20ac1/manager/0.log" Jan 27 13:49:03 crc kubenswrapper[4786]: I0127 13:49:03.624119 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b8546q8kt_45599804-69cd-44f0-bb76-a15e5a3ff700/manager/0.log" Jan 27 13:49:03 crc kubenswrapper[4786]: I0127 13:49:03.875765 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-7mc6w_e525b58f-7304-40e1-9fdc-949f43bb2cba/registry-server/0.log" Jan 27 13:49:04 crc kubenswrapper[4786]: I0127 13:49:04.019261 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-596b94879d-2gz6v_d7f033ce-43f9-425f-a74c-65735b66f5b8/manager/0.log" Jan 27 13:49:04 crc kubenswrapper[4786]: I0127 13:49:04.058594 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-cv96v_42803b12-da36-48df-b9bb-ed3d4555b7b4/manager/0.log" Jan 27 13:49:04 crc kubenswrapper[4786]: I0127 13:49:04.223748 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-ghr9t_250d0fff-09d2-49be-94a3-6eefdd3aab06/manager/0.log" Jan 27 13:49:04 crc kubenswrapper[4786]: I0127 13:49:04.274658 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-km9zd_fd3a1177-720b-4e0d-83d9-9ea046369690/operator/0.log" Jan 27 13:49:04 crc kubenswrapper[4786]: I0127 13:49:04.429944 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-qmp6d_9d2d7f2c-4522-45bf-a12d-1eb7cc11041e/manager/0.log" Jan 27 13:49:04 crc kubenswrapper[4786]: I0127 13:49:04.454314 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-fkbbc_39341414-eb82-400d-96ce-e546dd32d15b/manager/0.log" Jan 27 13:49:04 crc kubenswrapper[4786]: I0127 13:49:04.620666 4786 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-8whbf_7732c732-60c5-476d-bf01-ed83c38b4d35/manager/0.log" Jan 27 13:49:04 crc kubenswrapper[4786]: I0127 13:49:04.728631 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-q7glc_f7bba046-60b2-4fa4-96ec-976f73b1ff7c/manager/0.log" Jan 27 13:49:13 crc kubenswrapper[4786]: I0127 13:49:13.033190 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-ssjwj"] Jan 27 13:49:13 crc kubenswrapper[4786]: I0127 13:49:13.040897 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-ssjwj"] Jan 27 13:49:13 crc kubenswrapper[4786]: I0127 13:49:13.477245 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fac61271-09f6-4a27-bb20-edf0cb037d72" path="/var/lib/kubelet/pods/fac61271-09f6-4a27-bb20-edf0cb037d72/volumes" Jan 27 13:49:14 crc kubenswrapper[4786]: I0127 13:49:14.028665 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-j2ns7"] Jan 27 13:49:14 crc kubenswrapper[4786]: I0127 13:49:14.039195 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-j2ns7"] Jan 27 13:49:15 crc kubenswrapper[4786]: I0127 13:49:15.476466 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4df6489-80cb-45c8-90b2-7fd2e9bca103" path="/var/lib/kubelet/pods/e4df6489-80cb-45c8-90b2-7fd2e9bca103/volumes" Jan 27 13:49:16 crc kubenswrapper[4786]: I0127 13:49:16.465403 4786 scope.go:117] "RemoveContainer" containerID="9aee1b60d3250c89b1c0c42e1a4673943c8599c36c572d4bce7037dcd4019a1d" Jan 27 13:49:16 crc kubenswrapper[4786]: E0127 13:49:16.465891 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:49:25 crc kubenswrapper[4786]: I0127 13:49:25.114950 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-47c2z_4ac1751a-b5c6-46c7-a771-200f41805eea/control-plane-machine-set-operator/0.log" Jan 27 13:49:25 crc kubenswrapper[4786]: I0127 13:49:25.373512 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-pgkx8_b015794a-bfb0-4118-8dae-8861a7ff6a03/machine-api-operator/0.log" Jan 27 13:49:25 crc kubenswrapper[4786]: I0127 13:49:25.406274 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-pgkx8_b015794a-bfb0-4118-8dae-8861a7ff6a03/kube-rbac-proxy/0.log" Jan 27 13:49:29 crc kubenswrapper[4786]: I0127 13:49:29.465621 4786 scope.go:117] "RemoveContainer" containerID="9aee1b60d3250c89b1c0c42e1a4673943c8599c36c572d4bce7037dcd4019a1d" Jan 27 13:49:29 crc kubenswrapper[4786]: E0127 13:49:29.467647 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:49:32 crc kubenswrapper[4786]: I0127 13:49:32.040133 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4snp9"] Jan 27 13:49:32 crc 
kubenswrapper[4786]: I0127 13:49:32.050670 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4snp9"] Jan 27 13:49:33 crc kubenswrapper[4786]: I0127 13:49:33.474620 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="198b59b6-ce67-44fa-bf96-4c080e830106" path="/var/lib/kubelet/pods/198b59b6-ce67-44fa-bf96-4c080e830106/volumes" Jan 27 13:49:38 crc kubenswrapper[4786]: I0127 13:49:38.287183 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-nswrf_e1729a70-d007-4211-8c66-58d58ada9764/cert-manager-controller/0.log" Jan 27 13:49:38 crc kubenswrapper[4786]: I0127 13:49:38.534769 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-x6g5c_1b1e34b6-0b50-4461-821c-64f3cafd6d69/cert-manager-cainjector/0.log" Jan 27 13:49:38 crc kubenswrapper[4786]: I0127 13:49:38.585131 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-ddztq_e346953b-9953-4381-8ec7-72958174f6d3/cert-manager-webhook/0.log" Jan 27 13:49:42 crc kubenswrapper[4786]: I0127 13:49:42.464582 4786 scope.go:117] "RemoveContainer" containerID="9aee1b60d3250c89b1c0c42e1a4673943c8599c36c572d4bce7037dcd4019a1d" Jan 27 13:49:42 crc kubenswrapper[4786]: E0127 13:49:42.465152 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:49:51 crc kubenswrapper[4786]: I0127 13:49:51.674959 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-55mnf_67f83441-1412-437c-9cb1-c38ee7b70182/nmstate-console-plugin/0.log" Jan 27 13:49:51 crc kubenswrapper[4786]: I0127 13:49:51.858569 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-cjq4w_35fde08b-f604-431c-88a8-6fe254dc84aa/nmstate-handler/0.log" Jan 27 13:49:51 crc kubenswrapper[4786]: I0127 13:49:51.893764 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-rh4rm_19a395d7-a1ec-4c10-83ad-3f195b89fadd/kube-rbac-proxy/0.log" Jan 27 13:49:51 crc kubenswrapper[4786]: I0127 13:49:51.951944 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-rh4rm_19a395d7-a1ec-4c10-83ad-3f195b89fadd/nmstate-metrics/0.log" Jan 27 13:49:52 crc kubenswrapper[4786]: I0127 13:49:52.088195 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-jzcj9_f81e4a8e-6374-4da8-a409-624cabf87029/nmstate-operator/0.log" Jan 27 13:49:52 crc kubenswrapper[4786]: I0127 13:49:52.172193 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-nvbzs_59020701-49b7-412d-97a9-81ef6a905bb0/nmstate-webhook/0.log" Jan 27 13:49:53 crc kubenswrapper[4786]: I0127 13:49:53.466029 4786 scope.go:117] "RemoveContainer" containerID="9aee1b60d3250c89b1c0c42e1a4673943c8599c36c572d4bce7037dcd4019a1d" Jan 27 13:49:53 crc kubenswrapper[4786]: E0127 13:49:53.466543 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" 
Jan 27 13:50:01 crc kubenswrapper[4786]: I0127 13:50:01.889149 4786 scope.go:117] "RemoveContainer" containerID="18977355d27f18d461cd7c746c65146ba54d727a09e9aa866c1b892de3a9b3d8" Jan 27 13:50:01 crc kubenswrapper[4786]: I0127 13:50:01.931724 4786 scope.go:117] "RemoveContainer" containerID="78034341a7536312ef1d85d81d892c553707c2d23f2eea20de173280f5ec812e" Jan 27 13:50:01 crc kubenswrapper[4786]: I0127 13:50:01.981743 4786 scope.go:117] "RemoveContainer" containerID="b149f864946bbb75733731eb4c8a2d8304720f30dcdbc16ab982d29d51d7f9c6" Jan 27 13:50:04 crc kubenswrapper[4786]: I0127 13:50:04.465360 4786 scope.go:117] "RemoveContainer" containerID="9aee1b60d3250c89b1c0c42e1a4673943c8599c36c572d4bce7037dcd4019a1d" Jan 27 13:50:04 crc kubenswrapper[4786]: E0127 13:50:04.465920 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7bxtk_openshift-machine-config-operator(2c6a2646-52f7-41be-8a81-3fed6eac75cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" Jan 27 13:50:18 crc kubenswrapper[4786]: I0127 13:50:18.107718 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-qsx4x_fb3825b5-f83e-4064-ac93-19ee9e441b42/kube-rbac-proxy/0.log" Jan 27 13:50:18 crc kubenswrapper[4786]: I0127 13:50:18.208691 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-qsx4x_fb3825b5-f83e-4064-ac93-19ee9e441b42/controller/0.log" Jan 27 13:50:18 crc kubenswrapper[4786]: I0127 13:50:18.324910 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwqdq_0a3d249e-e994-4e5d-9970-04c4977f28c9/cp-frr-files/0.log" Jan 27 13:50:18 crc kubenswrapper[4786]: I0127 13:50:18.564524 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-hwqdq_0a3d249e-e994-4e5d-9970-04c4977f28c9/cp-frr-files/0.log" Jan 27 13:50:18 crc kubenswrapper[4786]: I0127 13:50:18.571933 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwqdq_0a3d249e-e994-4e5d-9970-04c4977f28c9/cp-metrics/0.log" Jan 27 13:50:18 crc kubenswrapper[4786]: I0127 13:50:18.582655 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwqdq_0a3d249e-e994-4e5d-9970-04c4977f28c9/cp-reloader/0.log" Jan 27 13:50:18 crc kubenswrapper[4786]: I0127 13:50:18.644435 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwqdq_0a3d249e-e994-4e5d-9970-04c4977f28c9/cp-reloader/0.log" Jan 27 13:50:18 crc kubenswrapper[4786]: I0127 13:50:18.839809 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwqdq_0a3d249e-e994-4e5d-9970-04c4977f28c9/cp-frr-files/0.log" Jan 27 13:50:18 crc kubenswrapper[4786]: I0127 13:50:18.845847 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwqdq_0a3d249e-e994-4e5d-9970-04c4977f28c9/cp-reloader/0.log" Jan 27 13:50:18 crc kubenswrapper[4786]: I0127 13:50:18.862056 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwqdq_0a3d249e-e994-4e5d-9970-04c4977f28c9/cp-metrics/0.log" Jan 27 13:50:18 crc kubenswrapper[4786]: I0127 13:50:18.872480 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwqdq_0a3d249e-e994-4e5d-9970-04c4977f28c9/cp-metrics/0.log" Jan 27 13:50:19 crc kubenswrapper[4786]: I0127 13:50:19.058171 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwqdq_0a3d249e-e994-4e5d-9970-04c4977f28c9/cp-metrics/0.log" Jan 27 13:50:19 crc kubenswrapper[4786]: I0127 13:50:19.086903 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-hwqdq_0a3d249e-e994-4e5d-9970-04c4977f28c9/cp-reloader/0.log" Jan 27 13:50:19 crc kubenswrapper[4786]: I0127 13:50:19.094316 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwqdq_0a3d249e-e994-4e5d-9970-04c4977f28c9/cp-frr-files/0.log" Jan 27 13:50:19 crc kubenswrapper[4786]: I0127 13:50:19.118537 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwqdq_0a3d249e-e994-4e5d-9970-04c4977f28c9/controller/0.log" Jan 27 13:50:19 crc kubenswrapper[4786]: I0127 13:50:19.259983 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwqdq_0a3d249e-e994-4e5d-9970-04c4977f28c9/kube-rbac-proxy/0.log" Jan 27 13:50:19 crc kubenswrapper[4786]: I0127 13:50:19.271456 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwqdq_0a3d249e-e994-4e5d-9970-04c4977f28c9/frr-metrics/0.log" Jan 27 13:50:19 crc kubenswrapper[4786]: I0127 13:50:19.318557 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwqdq_0a3d249e-e994-4e5d-9970-04c4977f28c9/kube-rbac-proxy-frr/0.log" Jan 27 13:50:19 crc kubenswrapper[4786]: I0127 13:50:19.441888 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwqdq_0a3d249e-e994-4e5d-9970-04c4977f28c9/reloader/0.log" Jan 27 13:50:19 crc kubenswrapper[4786]: I0127 13:50:19.464829 4786 scope.go:117] "RemoveContainer" containerID="9aee1b60d3250c89b1c0c42e1a4673943c8599c36c572d4bce7037dcd4019a1d" Jan 27 13:50:19 crc kubenswrapper[4786]: I0127 13:50:19.544020 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-f8lw7_0ffbeb23-d6ac-4a28-81a5-052f7c2c8618/frr-k8s-webhook-server/0.log" Jan 27 13:50:19 crc kubenswrapper[4786]: I0127 13:50:19.744055 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" 
event={"ID":"2c6a2646-52f7-41be-8a81-3fed6eac75cc","Type":"ContainerStarted","Data":"01dc83e541cd8f5738a00fe3145a7db1bf1be4f5116690254fc7c5d1c4fc7db1"} Jan 27 13:50:19 crc kubenswrapper[4786]: I0127 13:50:19.830808 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-544859b7c7-wsbph_0f481967-0c63-4c37-9daf-5da5dd5508fd/manager/0.log" Jan 27 13:50:19 crc kubenswrapper[4786]: I0127 13:50:19.930395 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-79f6764f84-kjh6v_84b09bb0-a364-469e-9302-c3582b359791/webhook-server/0.log" Jan 27 13:50:20 crc kubenswrapper[4786]: I0127 13:50:20.103495 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-92hjt_987590ac-66b7-4ab0-8c6b-b72bbd04bab2/kube-rbac-proxy/0.log" Jan 27 13:50:20 crc kubenswrapper[4786]: I0127 13:50:20.536894 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-92hjt_987590ac-66b7-4ab0-8c6b-b72bbd04bab2/speaker/0.log" Jan 27 13:50:20 crc kubenswrapper[4786]: I0127 13:50:20.631621 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwqdq_0a3d249e-e994-4e5d-9970-04c4977f28c9/frr/0.log" Jan 27 13:50:37 crc kubenswrapper[4786]: I0127 13:50:37.263450 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-8567ddf8f4-cxtk8_3c6e6e93-d5d9-4b2c-a285-8fd57f9994eb/keystone-api/0.log" Jan 27 13:50:37 crc kubenswrapper[4786]: I0127 13:50:37.692071 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-api-0_c7e0ce90-fda8-4074-b753-0df1531d7fcc/nova-kuttl-api-api/0.log" Jan 27 13:50:37 crc kubenswrapper[4786]: I0127 13:50:37.833674 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-api-0_c7e0ce90-fda8-4074-b753-0df1531d7fcc/nova-kuttl-api-log/0.log" Jan 27 13:50:38 crc kubenswrapper[4786]: 
I0127 13:50:38.094885 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell0-conductor-0_10c3eb0f-3265-4520-afd7-0e002bcc5b81/nova-kuttl-cell0-conductor-conductor/0.log" Jan 27 13:50:38 crc kubenswrapper[4786]: I0127 13:50:38.211318 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-conductor-0_616fd9dd-c4dc-45a7-ab66-358fc07acea0/nova-kuttl-cell1-conductor-conductor/0.log" Jan 27 13:50:38 crc kubenswrapper[4786]: I0127 13:50:38.376682 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-novncproxy-0_0a40bfc8-a365-4329-8362-ecd8b784f52d/nova-kuttl-cell1-novncproxy-novncproxy/0.log" Jan 27 13:50:38 crc kubenswrapper[4786]: I0127 13:50:38.634926 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-metadata-0_1d7007af-1e26-4b89-a761-5921086ff009/nova-kuttl-metadata-log/0.log" Jan 27 13:50:38 crc kubenswrapper[4786]: I0127 13:50:38.642121 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-metadata-0_1d7007af-1e26-4b89-a761-5921086ff009/nova-kuttl-metadata-metadata/0.log" Jan 27 13:50:38 crc kubenswrapper[4786]: I0127 13:50:38.917851 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_7a4e6dad-e854-4ecd-9441-04e72893ea29/mysql-bootstrap/0.log" Jan 27 13:50:39 crc kubenswrapper[4786]: I0127 13:50:39.080269 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-scheduler-0_4d2d365a-46c9-4f47-9501-654446cbd40d/nova-kuttl-scheduler-scheduler/0.log" Jan 27 13:50:39 crc kubenswrapper[4786]: I0127 13:50:39.219539 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_7a4e6dad-e854-4ecd-9441-04e72893ea29/mysql-bootstrap/0.log" Jan 27 13:50:39 crc kubenswrapper[4786]: I0127 13:50:39.236799 4786 log.go:25] 
"Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_7a4e6dad-e854-4ecd-9441-04e72893ea29/galera/0.log" Jan 27 13:50:39 crc kubenswrapper[4786]: I0127 13:50:39.500469 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_d5e1220e-a41a-4e46-890f-3502e548bf66/mysql-bootstrap/0.log" Jan 27 13:50:39 crc kubenswrapper[4786]: I0127 13:50:39.658505 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_d5e1220e-a41a-4e46-890f-3502e548bf66/mysql-bootstrap/0.log" Jan 27 13:50:39 crc kubenswrapper[4786]: I0127 13:50:39.714355 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_d5e1220e-a41a-4e46-890f-3502e548bf66/galera/0.log" Jan 27 13:50:39 crc kubenswrapper[4786]: I0127 13:50:39.725808 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_memcached-0_a011c0d3-4039-465f-9ea6-acad60c397dd/memcached/0.log" Jan 27 13:50:39 crc kubenswrapper[4786]: I0127 13:50:39.912859 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstackclient_ebe93b02-f04c-48d1-8f5f-68e113379180/openstackclient/0.log" Jan 27 13:50:39 crc kubenswrapper[4786]: I0127 13:50:39.951793 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-57fbd5dfd8-mlllb_9e2844b5-b7ec-43fd-873a-6cdaa879c676/placement-api/0.log" Jan 27 13:50:40 crc kubenswrapper[4786]: I0127 13:50:40.111025 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_f76eacb2-75ca-46c4-badb-b1404b018bf6/setup-container/0.log" Jan 27 13:50:40 crc kubenswrapper[4786]: I0127 13:50:40.322630 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_f76eacb2-75ca-46c4-badb-b1404b018bf6/setup-container/0.log" Jan 27 13:50:40 crc kubenswrapper[4786]: I0127 
13:50:40.372023 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_f76eacb2-75ca-46c4-badb-b1404b018bf6/rabbitmq/0.log" Jan 27 13:50:40 crc kubenswrapper[4786]: I0127 13:50:40.501913 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_9f2857cb-9399-4563-b68e-3b51cbd47f80/setup-container/0.log" Jan 27 13:50:40 crc kubenswrapper[4786]: I0127 13:50:40.583059 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-57fbd5dfd8-mlllb_9e2844b5-b7ec-43fd-873a-6cdaa879c676/placement-log/0.log" Jan 27 13:50:40 crc kubenswrapper[4786]: I0127 13:50:40.710880 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_9f2857cb-9399-4563-b68e-3b51cbd47f80/setup-container/0.log" Jan 27 13:50:40 crc kubenswrapper[4786]: I0127 13:50:40.733625 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_9f2857cb-9399-4563-b68e-3b51cbd47f80/rabbitmq/0.log" Jan 27 13:50:40 crc kubenswrapper[4786]: I0127 13:50:40.808982 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_c8472c3b-b877-4e6c-992f-f4146f81e3fc/setup-container/0.log" Jan 27 13:50:40 crc kubenswrapper[4786]: I0127 13:50:40.942087 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_c8472c3b-b877-4e6c-992f-f4146f81e3fc/setup-container/0.log" Jan 27 13:50:40 crc kubenswrapper[4786]: I0127 13:50:40.970884 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_c8472c3b-b877-4e6c-992f-f4146f81e3fc/rabbitmq/0.log" Jan 27 13:50:54 crc kubenswrapper[4786]: I0127 13:50:54.060377 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a62l49_f397bffc-b155-4fbd-896c-82c4a3d83f3e/util/0.log" Jan 27 13:50:54 crc kubenswrapper[4786]: I0127 13:50:54.270421 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a62l49_f397bffc-b155-4fbd-896c-82c4a3d83f3e/util/0.log" Jan 27 13:50:54 crc kubenswrapper[4786]: I0127 13:50:54.287736 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a62l49_f397bffc-b155-4fbd-896c-82c4a3d83f3e/pull/0.log" Jan 27 13:50:54 crc kubenswrapper[4786]: I0127 13:50:54.299749 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a62l49_f397bffc-b155-4fbd-896c-82c4a3d83f3e/pull/0.log" Jan 27 13:50:54 crc kubenswrapper[4786]: I0127 13:50:54.473940 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a62l49_f397bffc-b155-4fbd-896c-82c4a3d83f3e/util/0.log" Jan 27 13:50:54 crc kubenswrapper[4786]: I0127 13:50:54.475570 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a62l49_f397bffc-b155-4fbd-896c-82c4a3d83f3e/pull/0.log" Jan 27 13:50:54 crc kubenswrapper[4786]: I0127 13:50:54.488462 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a62l49_f397bffc-b155-4fbd-896c-82c4a3d83f3e/extract/0.log" Jan 27 13:50:54 crc kubenswrapper[4786]: I0127 13:50:54.642459 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc42nbb_2cb03dff-5d00-4c73-a067-16b5513602a9/util/0.log" Jan 27 
13:50:54 crc kubenswrapper[4786]: I0127 13:50:54.831132 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc42nbb_2cb03dff-5d00-4c73-a067-16b5513602a9/pull/0.log" Jan 27 13:50:54 crc kubenswrapper[4786]: I0127 13:50:54.877983 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc42nbb_2cb03dff-5d00-4c73-a067-16b5513602a9/util/0.log" Jan 27 13:50:54 crc kubenswrapper[4786]: I0127 13:50:54.887194 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc42nbb_2cb03dff-5d00-4c73-a067-16b5513602a9/pull/0.log" Jan 27 13:50:55 crc kubenswrapper[4786]: I0127 13:50:55.033753 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc42nbb_2cb03dff-5d00-4c73-a067-16b5513602a9/util/0.log" Jan 27 13:50:55 crc kubenswrapper[4786]: I0127 13:50:55.061282 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc42nbb_2cb03dff-5d00-4c73-a067-16b5513602a9/pull/0.log" Jan 27 13:50:55 crc kubenswrapper[4786]: I0127 13:50:55.108720 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc42nbb_2cb03dff-5d00-4c73-a067-16b5513602a9/extract/0.log" Jan 27 13:50:55 crc kubenswrapper[4786]: I0127 13:50:55.223486 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713svsrs_d2d928ff-1d55-488f-92a8-9f2e8efd62f8/util/0.log" Jan 27 13:50:55 crc kubenswrapper[4786]: I0127 13:50:55.457741 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713svsrs_d2d928ff-1d55-488f-92a8-9f2e8efd62f8/pull/0.log" Jan 27 13:50:55 crc kubenswrapper[4786]: I0127 13:50:55.457983 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713svsrs_d2d928ff-1d55-488f-92a8-9f2e8efd62f8/util/0.log" Jan 27 13:50:55 crc kubenswrapper[4786]: I0127 13:50:55.495817 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713svsrs_d2d928ff-1d55-488f-92a8-9f2e8efd62f8/pull/0.log" Jan 27 13:50:55 crc kubenswrapper[4786]: I0127 13:50:55.695832 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713svsrs_d2d928ff-1d55-488f-92a8-9f2e8efd62f8/util/0.log" Jan 27 13:50:55 crc kubenswrapper[4786]: I0127 13:50:55.730055 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713svsrs_d2d928ff-1d55-488f-92a8-9f2e8efd62f8/pull/0.log" Jan 27 13:50:55 crc kubenswrapper[4786]: I0127 13:50:55.731181 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713svsrs_d2d928ff-1d55-488f-92a8-9f2e8efd62f8/extract/0.log" Jan 27 13:50:55 crc kubenswrapper[4786]: I0127 13:50:55.883590 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pn27n_b483450a-cb02-4df9-8e5a-640c6b21f731/extract-utilities/0.log" Jan 27 13:50:56 crc kubenswrapper[4786]: I0127 13:50:56.070450 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pn27n_b483450a-cb02-4df9-8e5a-640c6b21f731/extract-content/0.log" Jan 27 13:50:56 crc kubenswrapper[4786]: I0127 
13:50:56.071384 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pn27n_b483450a-cb02-4df9-8e5a-640c6b21f731/extract-content/0.log" Jan 27 13:50:56 crc kubenswrapper[4786]: I0127 13:50:56.091264 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pn27n_b483450a-cb02-4df9-8e5a-640c6b21f731/extract-utilities/0.log" Jan 27 13:50:56 crc kubenswrapper[4786]: I0127 13:50:56.266059 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pn27n_b483450a-cb02-4df9-8e5a-640c6b21f731/extract-utilities/0.log" Jan 27 13:50:56 crc kubenswrapper[4786]: I0127 13:50:56.272878 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pn27n_b483450a-cb02-4df9-8e5a-640c6b21f731/extract-content/0.log" Jan 27 13:50:56 crc kubenswrapper[4786]: I0127 13:50:56.518315 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-64c65_84df5426-1bd1-4e68-bf2b-a3e7ef1fd9a2/extract-utilities/0.log" Jan 27 13:50:56 crc kubenswrapper[4786]: I0127 13:50:56.525130 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pn27n_b483450a-cb02-4df9-8e5a-640c6b21f731/registry-server/0.log" Jan 27 13:50:56 crc kubenswrapper[4786]: I0127 13:50:56.682847 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-64c65_84df5426-1bd1-4e68-bf2b-a3e7ef1fd9a2/extract-content/0.log" Jan 27 13:50:56 crc kubenswrapper[4786]: I0127 13:50:56.684628 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-64c65_84df5426-1bd1-4e68-bf2b-a3e7ef1fd9a2/extract-utilities/0.log" Jan 27 13:50:56 crc kubenswrapper[4786]: I0127 13:50:56.727153 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-64c65_84df5426-1bd1-4e68-bf2b-a3e7ef1fd9a2/extract-content/0.log" Jan 27 13:50:56 crc kubenswrapper[4786]: I0127 13:50:56.934234 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-64c65_84df5426-1bd1-4e68-bf2b-a3e7ef1fd9a2/extract-utilities/0.log" Jan 27 13:50:56 crc kubenswrapper[4786]: I0127 13:50:56.972327 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-64c65_84df5426-1bd1-4e68-bf2b-a3e7ef1fd9a2/extract-content/0.log" Jan 27 13:50:57 crc kubenswrapper[4786]: I0127 13:50:57.162659 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-245sv_d9ebf2ac-8724-4914-be74-cae8c48760d8/marketplace-operator/0.log" Jan 27 13:50:57 crc kubenswrapper[4786]: I0127 13:50:57.341494 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jsgtz_74e8e0b9-48df-45eb-a0dd-72271431991c/extract-utilities/0.log" Jan 27 13:50:57 crc kubenswrapper[4786]: I0127 13:50:57.535571 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-64c65_84df5426-1bd1-4e68-bf2b-a3e7ef1fd9a2/registry-server/0.log" Jan 27 13:50:57 crc kubenswrapper[4786]: I0127 13:50:57.600259 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jsgtz_74e8e0b9-48df-45eb-a0dd-72271431991c/extract-content/0.log" Jan 27 13:50:57 crc kubenswrapper[4786]: I0127 13:50:57.629564 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jsgtz_74e8e0b9-48df-45eb-a0dd-72271431991c/extract-utilities/0.log" Jan 27 13:50:57 crc kubenswrapper[4786]: I0127 13:50:57.630088 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-jsgtz_74e8e0b9-48df-45eb-a0dd-72271431991c/extract-content/0.log" Jan 27 13:50:57 crc kubenswrapper[4786]: I0127 13:50:57.825855 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jsgtz_74e8e0b9-48df-45eb-a0dd-72271431991c/extract-utilities/0.log" Jan 27 13:50:57 crc kubenswrapper[4786]: I0127 13:50:57.832054 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jsgtz_74e8e0b9-48df-45eb-a0dd-72271431991c/extract-content/0.log" Jan 27 13:50:58 crc kubenswrapper[4786]: I0127 13:50:58.002912 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jsgtz_74e8e0b9-48df-45eb-a0dd-72271431991c/registry-server/0.log" Jan 27 13:50:58 crc kubenswrapper[4786]: I0127 13:50:58.071498 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xvj57_c1abdef6-52c6-4c82-b750-a46910bbb108/extract-utilities/0.log" Jan 27 13:50:58 crc kubenswrapper[4786]: I0127 13:50:58.205724 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xvj57_c1abdef6-52c6-4c82-b750-a46910bbb108/extract-utilities/0.log" Jan 27 13:50:58 crc kubenswrapper[4786]: I0127 13:50:58.224063 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xvj57_c1abdef6-52c6-4c82-b750-a46910bbb108/extract-content/0.log" Jan 27 13:50:58 crc kubenswrapper[4786]: I0127 13:50:58.266877 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xvj57_c1abdef6-52c6-4c82-b750-a46910bbb108/extract-content/0.log" Jan 27 13:50:58 crc kubenswrapper[4786]: I0127 13:50:58.407034 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xvj57_c1abdef6-52c6-4c82-b750-a46910bbb108/extract-utilities/0.log" 
Jan 27 13:50:58 crc kubenswrapper[4786]: I0127 13:50:58.459807 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xvj57_c1abdef6-52c6-4c82-b750-a46910bbb108/extract-content/0.log" Jan 27 13:50:59 crc kubenswrapper[4786]: I0127 13:50:59.004760 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xvj57_c1abdef6-52c6-4c82-b750-a46910bbb108/registry-server/0.log" Jan 27 13:51:06 crc kubenswrapper[4786]: I0127 13:51:06.912268 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mhzmn"] Jan 27 13:51:06 crc kubenswrapper[4786]: I0127 13:51:06.916077 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mhzmn" Jan 27 13:51:06 crc kubenswrapper[4786]: I0127 13:51:06.939520 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mhzmn"] Jan 27 13:51:07 crc kubenswrapper[4786]: I0127 13:51:07.052409 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/558d0d6f-49ea-47d7-b4f1-c8aa2c439823-catalog-content\") pod \"redhat-operators-mhzmn\" (UID: \"558d0d6f-49ea-47d7-b4f1-c8aa2c439823\") " pod="openshift-marketplace/redhat-operators-mhzmn" Jan 27 13:51:07 crc kubenswrapper[4786]: I0127 13:51:07.052534 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/558d0d6f-49ea-47d7-b4f1-c8aa2c439823-utilities\") pod \"redhat-operators-mhzmn\" (UID: \"558d0d6f-49ea-47d7-b4f1-c8aa2c439823\") " pod="openshift-marketplace/redhat-operators-mhzmn" Jan 27 13:51:07 crc kubenswrapper[4786]: I0127 13:51:07.052573 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrkjv\" 
(UniqueName: \"kubernetes.io/projected/558d0d6f-49ea-47d7-b4f1-c8aa2c439823-kube-api-access-nrkjv\") pod \"redhat-operators-mhzmn\" (UID: \"558d0d6f-49ea-47d7-b4f1-c8aa2c439823\") " pod="openshift-marketplace/redhat-operators-mhzmn" Jan 27 13:51:07 crc kubenswrapper[4786]: I0127 13:51:07.153748 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrkjv\" (UniqueName: \"kubernetes.io/projected/558d0d6f-49ea-47d7-b4f1-c8aa2c439823-kube-api-access-nrkjv\") pod \"redhat-operators-mhzmn\" (UID: \"558d0d6f-49ea-47d7-b4f1-c8aa2c439823\") " pod="openshift-marketplace/redhat-operators-mhzmn" Jan 27 13:51:07 crc kubenswrapper[4786]: I0127 13:51:07.154091 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/558d0d6f-49ea-47d7-b4f1-c8aa2c439823-catalog-content\") pod \"redhat-operators-mhzmn\" (UID: \"558d0d6f-49ea-47d7-b4f1-c8aa2c439823\") " pod="openshift-marketplace/redhat-operators-mhzmn" Jan 27 13:51:07 crc kubenswrapper[4786]: I0127 13:51:07.154209 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/558d0d6f-49ea-47d7-b4f1-c8aa2c439823-utilities\") pod \"redhat-operators-mhzmn\" (UID: \"558d0d6f-49ea-47d7-b4f1-c8aa2c439823\") " pod="openshift-marketplace/redhat-operators-mhzmn" Jan 27 13:51:07 crc kubenswrapper[4786]: I0127 13:51:07.154566 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/558d0d6f-49ea-47d7-b4f1-c8aa2c439823-catalog-content\") pod \"redhat-operators-mhzmn\" (UID: \"558d0d6f-49ea-47d7-b4f1-c8aa2c439823\") " pod="openshift-marketplace/redhat-operators-mhzmn" Jan 27 13:51:07 crc kubenswrapper[4786]: I0127 13:51:07.154639 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/558d0d6f-49ea-47d7-b4f1-c8aa2c439823-utilities\") pod \"redhat-operators-mhzmn\" (UID: \"558d0d6f-49ea-47d7-b4f1-c8aa2c439823\") " pod="openshift-marketplace/redhat-operators-mhzmn" Jan 27 13:51:07 crc kubenswrapper[4786]: I0127 13:51:07.179528 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrkjv\" (UniqueName: \"kubernetes.io/projected/558d0d6f-49ea-47d7-b4f1-c8aa2c439823-kube-api-access-nrkjv\") pod \"redhat-operators-mhzmn\" (UID: \"558d0d6f-49ea-47d7-b4f1-c8aa2c439823\") " pod="openshift-marketplace/redhat-operators-mhzmn" Jan 27 13:51:07 crc kubenswrapper[4786]: I0127 13:51:07.263876 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mhzmn" Jan 27 13:51:07 crc kubenswrapper[4786]: I0127 13:51:07.759580 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mhzmn"] Jan 27 13:51:08 crc kubenswrapper[4786]: I0127 13:51:08.085780 4786 generic.go:334] "Generic (PLEG): container finished" podID="558d0d6f-49ea-47d7-b4f1-c8aa2c439823" containerID="81980dcd08e71deeccb142cb8c1068b83ae3e1893d4a53c26d143cd142fed4ae" exitCode=0 Jan 27 13:51:08 crc kubenswrapper[4786]: I0127 13:51:08.085837 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mhzmn" event={"ID":"558d0d6f-49ea-47d7-b4f1-c8aa2c439823","Type":"ContainerDied","Data":"81980dcd08e71deeccb142cb8c1068b83ae3e1893d4a53c26d143cd142fed4ae"} Jan 27 13:51:08 crc kubenswrapper[4786]: I0127 13:51:08.085869 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mhzmn" event={"ID":"558d0d6f-49ea-47d7-b4f1-c8aa2c439823","Type":"ContainerStarted","Data":"e036872e6c58c9e8d33cd26c525dd21de2cceb7684c317194f3fdf57e8dd6d93"} Jan 27 13:51:08 crc kubenswrapper[4786]: I0127 13:51:08.088077 4786 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Jan 27 13:51:09 crc kubenswrapper[4786]: I0127 13:51:09.098399 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mhzmn" event={"ID":"558d0d6f-49ea-47d7-b4f1-c8aa2c439823","Type":"ContainerStarted","Data":"9d0a49e5b259a4c05db265adce56d3204276a06de2549800a54cc05f1cda9b24"} Jan 27 13:51:10 crc kubenswrapper[4786]: I0127 13:51:10.109369 4786 generic.go:334] "Generic (PLEG): container finished" podID="558d0d6f-49ea-47d7-b4f1-c8aa2c439823" containerID="9d0a49e5b259a4c05db265adce56d3204276a06de2549800a54cc05f1cda9b24" exitCode=0 Jan 27 13:51:10 crc kubenswrapper[4786]: I0127 13:51:10.109425 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mhzmn" event={"ID":"558d0d6f-49ea-47d7-b4f1-c8aa2c439823","Type":"ContainerDied","Data":"9d0a49e5b259a4c05db265adce56d3204276a06de2549800a54cc05f1cda9b24"} Jan 27 13:51:11 crc kubenswrapper[4786]: I0127 13:51:11.130461 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mhzmn" event={"ID":"558d0d6f-49ea-47d7-b4f1-c8aa2c439823","Type":"ContainerStarted","Data":"e825a0e38fec0c2e408bb20bee4a5cbd23640adb8f7a53d1dd01d1bf81c38f15"} Jan 27 13:51:11 crc kubenswrapper[4786]: I0127 13:51:11.152882 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mhzmn" podStartSLOduration=2.715369958 podStartE2EDuration="5.152864253s" podCreationTimestamp="2026-01-27 13:51:06 +0000 UTC" firstStartedPulling="2026-01-27 13:51:08.08778329 +0000 UTC m=+2651.298397409" lastFinishedPulling="2026-01-27 13:51:10.525277595 +0000 UTC m=+2653.735891704" observedRunningTime="2026-01-27 13:51:11.147150957 +0000 UTC m=+2654.357765076" watchObservedRunningTime="2026-01-27 13:51:11.152864253 +0000 UTC m=+2654.363478372" Jan 27 13:51:17 crc kubenswrapper[4786]: I0127 13:51:17.264687 4786 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mhzmn" Jan 27 13:51:17 crc kubenswrapper[4786]: I0127 13:51:17.265231 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mhzmn" Jan 27 13:51:17 crc kubenswrapper[4786]: I0127 13:51:17.313598 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mhzmn" Jan 27 13:51:18 crc kubenswrapper[4786]: I0127 13:51:18.224335 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mhzmn" Jan 27 13:51:18 crc kubenswrapper[4786]: I0127 13:51:18.273734 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mhzmn"] Jan 27 13:51:20 crc kubenswrapper[4786]: I0127 13:51:20.195156 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mhzmn" podUID="558d0d6f-49ea-47d7-b4f1-c8aa2c439823" containerName="registry-server" containerID="cri-o://e825a0e38fec0c2e408bb20bee4a5cbd23640adb8f7a53d1dd01d1bf81c38f15" gracePeriod=2 Jan 27 13:51:20 crc kubenswrapper[4786]: I0127 13:51:20.703141 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mhzmn" Jan 27 13:51:20 crc kubenswrapper[4786]: I0127 13:51:20.788220 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/558d0d6f-49ea-47d7-b4f1-c8aa2c439823-utilities\") pod \"558d0d6f-49ea-47d7-b4f1-c8aa2c439823\" (UID: \"558d0d6f-49ea-47d7-b4f1-c8aa2c439823\") " Jan 27 13:51:20 crc kubenswrapper[4786]: I0127 13:51:20.788340 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrkjv\" (UniqueName: \"kubernetes.io/projected/558d0d6f-49ea-47d7-b4f1-c8aa2c439823-kube-api-access-nrkjv\") pod \"558d0d6f-49ea-47d7-b4f1-c8aa2c439823\" (UID: \"558d0d6f-49ea-47d7-b4f1-c8aa2c439823\") " Jan 27 13:51:20 crc kubenswrapper[4786]: I0127 13:51:20.788506 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/558d0d6f-49ea-47d7-b4f1-c8aa2c439823-catalog-content\") pod \"558d0d6f-49ea-47d7-b4f1-c8aa2c439823\" (UID: \"558d0d6f-49ea-47d7-b4f1-c8aa2c439823\") " Jan 27 13:51:20 crc kubenswrapper[4786]: I0127 13:51:20.790463 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/558d0d6f-49ea-47d7-b4f1-c8aa2c439823-utilities" (OuterVolumeSpecName: "utilities") pod "558d0d6f-49ea-47d7-b4f1-c8aa2c439823" (UID: "558d0d6f-49ea-47d7-b4f1-c8aa2c439823"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:51:20 crc kubenswrapper[4786]: I0127 13:51:20.796393 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/558d0d6f-49ea-47d7-b4f1-c8aa2c439823-kube-api-access-nrkjv" (OuterVolumeSpecName: "kube-api-access-nrkjv") pod "558d0d6f-49ea-47d7-b4f1-c8aa2c439823" (UID: "558d0d6f-49ea-47d7-b4f1-c8aa2c439823"). InnerVolumeSpecName "kube-api-access-nrkjv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:51:20 crc kubenswrapper[4786]: I0127 13:51:20.891008 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/558d0d6f-49ea-47d7-b4f1-c8aa2c439823-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 13:51:20 crc kubenswrapper[4786]: I0127 13:51:20.891044 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrkjv\" (UniqueName: \"kubernetes.io/projected/558d0d6f-49ea-47d7-b4f1-c8aa2c439823-kube-api-access-nrkjv\") on node \"crc\" DevicePath \"\"" Jan 27 13:51:20 crc kubenswrapper[4786]: I0127 13:51:20.912332 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/558d0d6f-49ea-47d7-b4f1-c8aa2c439823-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "558d0d6f-49ea-47d7-b4f1-c8aa2c439823" (UID: "558d0d6f-49ea-47d7-b4f1-c8aa2c439823"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:51:20 crc kubenswrapper[4786]: I0127 13:51:20.998964 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/558d0d6f-49ea-47d7-b4f1-c8aa2c439823-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 13:51:21 crc kubenswrapper[4786]: I0127 13:51:21.207400 4786 generic.go:334] "Generic (PLEG): container finished" podID="558d0d6f-49ea-47d7-b4f1-c8aa2c439823" containerID="e825a0e38fec0c2e408bb20bee4a5cbd23640adb8f7a53d1dd01d1bf81c38f15" exitCode=0 Jan 27 13:51:21 crc kubenswrapper[4786]: I0127 13:51:21.207456 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mhzmn" event={"ID":"558d0d6f-49ea-47d7-b4f1-c8aa2c439823","Type":"ContainerDied","Data":"e825a0e38fec0c2e408bb20bee4a5cbd23640adb8f7a53d1dd01d1bf81c38f15"} Jan 27 13:51:21 crc kubenswrapper[4786]: I0127 13:51:21.207502 4786 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mhzmn" Jan 27 13:51:21 crc kubenswrapper[4786]: I0127 13:51:21.207520 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mhzmn" event={"ID":"558d0d6f-49ea-47d7-b4f1-c8aa2c439823","Type":"ContainerDied","Data":"e036872e6c58c9e8d33cd26c525dd21de2cceb7684c317194f3fdf57e8dd6d93"} Jan 27 13:51:21 crc kubenswrapper[4786]: I0127 13:51:21.207554 4786 scope.go:117] "RemoveContainer" containerID="e825a0e38fec0c2e408bb20bee4a5cbd23640adb8f7a53d1dd01d1bf81c38f15" Jan 27 13:51:21 crc kubenswrapper[4786]: I0127 13:51:21.231937 4786 scope.go:117] "RemoveContainer" containerID="9d0a49e5b259a4c05db265adce56d3204276a06de2549800a54cc05f1cda9b24" Jan 27 13:51:21 crc kubenswrapper[4786]: I0127 13:51:21.255998 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mhzmn"] Jan 27 13:51:21 crc kubenswrapper[4786]: I0127 13:51:21.263535 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mhzmn"] Jan 27 13:51:21 crc kubenswrapper[4786]: I0127 13:51:21.267184 4786 scope.go:117] "RemoveContainer" containerID="81980dcd08e71deeccb142cb8c1068b83ae3e1893d4a53c26d143cd142fed4ae" Jan 27 13:51:21 crc kubenswrapper[4786]: I0127 13:51:21.303912 4786 scope.go:117] "RemoveContainer" containerID="e825a0e38fec0c2e408bb20bee4a5cbd23640adb8f7a53d1dd01d1bf81c38f15" Jan 27 13:51:21 crc kubenswrapper[4786]: E0127 13:51:21.304669 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e825a0e38fec0c2e408bb20bee4a5cbd23640adb8f7a53d1dd01d1bf81c38f15\": container with ID starting with e825a0e38fec0c2e408bb20bee4a5cbd23640adb8f7a53d1dd01d1bf81c38f15 not found: ID does not exist" containerID="e825a0e38fec0c2e408bb20bee4a5cbd23640adb8f7a53d1dd01d1bf81c38f15" Jan 27 13:51:21 crc kubenswrapper[4786]: I0127 13:51:21.304745 4786 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e825a0e38fec0c2e408bb20bee4a5cbd23640adb8f7a53d1dd01d1bf81c38f15"} err="failed to get container status \"e825a0e38fec0c2e408bb20bee4a5cbd23640adb8f7a53d1dd01d1bf81c38f15\": rpc error: code = NotFound desc = could not find container \"e825a0e38fec0c2e408bb20bee4a5cbd23640adb8f7a53d1dd01d1bf81c38f15\": container with ID starting with e825a0e38fec0c2e408bb20bee4a5cbd23640adb8f7a53d1dd01d1bf81c38f15 not found: ID does not exist" Jan 27 13:51:21 crc kubenswrapper[4786]: I0127 13:51:21.304795 4786 scope.go:117] "RemoveContainer" containerID="9d0a49e5b259a4c05db265adce56d3204276a06de2549800a54cc05f1cda9b24" Jan 27 13:51:21 crc kubenswrapper[4786]: E0127 13:51:21.305362 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d0a49e5b259a4c05db265adce56d3204276a06de2549800a54cc05f1cda9b24\": container with ID starting with 9d0a49e5b259a4c05db265adce56d3204276a06de2549800a54cc05f1cda9b24 not found: ID does not exist" containerID="9d0a49e5b259a4c05db265adce56d3204276a06de2549800a54cc05f1cda9b24" Jan 27 13:51:21 crc kubenswrapper[4786]: I0127 13:51:21.305419 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d0a49e5b259a4c05db265adce56d3204276a06de2549800a54cc05f1cda9b24"} err="failed to get container status \"9d0a49e5b259a4c05db265adce56d3204276a06de2549800a54cc05f1cda9b24\": rpc error: code = NotFound desc = could not find container \"9d0a49e5b259a4c05db265adce56d3204276a06de2549800a54cc05f1cda9b24\": container with ID starting with 9d0a49e5b259a4c05db265adce56d3204276a06de2549800a54cc05f1cda9b24 not found: ID does not exist" Jan 27 13:51:21 crc kubenswrapper[4786]: I0127 13:51:21.305435 4786 scope.go:117] "RemoveContainer" containerID="81980dcd08e71deeccb142cb8c1068b83ae3e1893d4a53c26d143cd142fed4ae" Jan 27 13:51:21 crc kubenswrapper[4786]: E0127 
13:51:21.305747 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81980dcd08e71deeccb142cb8c1068b83ae3e1893d4a53c26d143cd142fed4ae\": container with ID starting with 81980dcd08e71deeccb142cb8c1068b83ae3e1893d4a53c26d143cd142fed4ae not found: ID does not exist" containerID="81980dcd08e71deeccb142cb8c1068b83ae3e1893d4a53c26d143cd142fed4ae" Jan 27 13:51:21 crc kubenswrapper[4786]: I0127 13:51:21.305768 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81980dcd08e71deeccb142cb8c1068b83ae3e1893d4a53c26d143cd142fed4ae"} err="failed to get container status \"81980dcd08e71deeccb142cb8c1068b83ae3e1893d4a53c26d143cd142fed4ae\": rpc error: code = NotFound desc = could not find container \"81980dcd08e71deeccb142cb8c1068b83ae3e1893d4a53c26d143cd142fed4ae\": container with ID starting with 81980dcd08e71deeccb142cb8c1068b83ae3e1893d4a53c26d143cd142fed4ae not found: ID does not exist" Jan 27 13:51:21 crc kubenswrapper[4786]: I0127 13:51:21.476087 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="558d0d6f-49ea-47d7-b4f1-c8aa2c439823" path="/var/lib/kubelet/pods/558d0d6f-49ea-47d7-b4f1-c8aa2c439823/volumes" Jan 27 13:52:21 crc kubenswrapper[4786]: I0127 13:52:21.672318 4786 generic.go:334] "Generic (PLEG): container finished" podID="e6aee19f-6886-40ca-a074-299f4806ca27" containerID="d54951db120c28c630914acfc63efe21e1153d2fad606b95b39d6d74c39a5e1c" exitCode=0 Jan 27 13:52:21 crc kubenswrapper[4786]: I0127 13:52:21.672359 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-r7tdj/must-gather-ztp6m" event={"ID":"e6aee19f-6886-40ca-a074-299f4806ca27","Type":"ContainerDied","Data":"d54951db120c28c630914acfc63efe21e1153d2fad606b95b39d6d74c39a5e1c"} Jan 27 13:52:21 crc kubenswrapper[4786]: I0127 13:52:21.673370 4786 scope.go:117] "RemoveContainer" 
containerID="d54951db120c28c630914acfc63efe21e1153d2fad606b95b39d6d74c39a5e1c" Jan 27 13:52:22 crc kubenswrapper[4786]: I0127 13:52:22.603297 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-r7tdj_must-gather-ztp6m_e6aee19f-6886-40ca-a074-299f4806ca27/gather/0.log" Jan 27 13:52:30 crc kubenswrapper[4786]: I0127 13:52:30.529844 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-r7tdj/must-gather-ztp6m"] Jan 27 13:52:30 crc kubenswrapper[4786]: I0127 13:52:30.530685 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-r7tdj/must-gather-ztp6m" podUID="e6aee19f-6886-40ca-a074-299f4806ca27" containerName="copy" containerID="cri-o://a7e485d3183b47d77b68662b159d77bd082e3948b3366e2209154e64e93673cd" gracePeriod=2 Jan 27 13:52:30 crc kubenswrapper[4786]: I0127 13:52:30.537025 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-r7tdj/must-gather-ztp6m"] Jan 27 13:52:30 crc kubenswrapper[4786]: I0127 13:52:30.756361 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-r7tdj_must-gather-ztp6m_e6aee19f-6886-40ca-a074-299f4806ca27/copy/0.log" Jan 27 13:52:30 crc kubenswrapper[4786]: I0127 13:52:30.756897 4786 generic.go:334] "Generic (PLEG): container finished" podID="e6aee19f-6886-40ca-a074-299f4806ca27" containerID="a7e485d3183b47d77b68662b159d77bd082e3948b3366e2209154e64e93673cd" exitCode=143 Jan 27 13:52:30 crc kubenswrapper[4786]: I0127 13:52:30.983292 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-r7tdj_must-gather-ztp6m_e6aee19f-6886-40ca-a074-299f4806ca27/copy/0.log" Jan 27 13:52:30 crc kubenswrapper[4786]: I0127 13:52:30.983692 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-r7tdj/must-gather-ztp6m" Jan 27 13:52:31 crc kubenswrapper[4786]: I0127 13:52:31.125558 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e6aee19f-6886-40ca-a074-299f4806ca27-must-gather-output\") pod \"e6aee19f-6886-40ca-a074-299f4806ca27\" (UID: \"e6aee19f-6886-40ca-a074-299f4806ca27\") " Jan 27 13:52:31 crc kubenswrapper[4786]: I0127 13:52:31.125671 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-664kx\" (UniqueName: \"kubernetes.io/projected/e6aee19f-6886-40ca-a074-299f4806ca27-kube-api-access-664kx\") pod \"e6aee19f-6886-40ca-a074-299f4806ca27\" (UID: \"e6aee19f-6886-40ca-a074-299f4806ca27\") " Jan 27 13:52:31 crc kubenswrapper[4786]: I0127 13:52:31.131408 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6aee19f-6886-40ca-a074-299f4806ca27-kube-api-access-664kx" (OuterVolumeSpecName: "kube-api-access-664kx") pod "e6aee19f-6886-40ca-a074-299f4806ca27" (UID: "e6aee19f-6886-40ca-a074-299f4806ca27"). InnerVolumeSpecName "kube-api-access-664kx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:52:31 crc kubenswrapper[4786]: I0127 13:52:31.227229 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-664kx\" (UniqueName: \"kubernetes.io/projected/e6aee19f-6886-40ca-a074-299f4806ca27-kube-api-access-664kx\") on node \"crc\" DevicePath \"\"" Jan 27 13:52:31 crc kubenswrapper[4786]: I0127 13:52:31.234326 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6aee19f-6886-40ca-a074-299f4806ca27-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "e6aee19f-6886-40ca-a074-299f4806ca27" (UID: "e6aee19f-6886-40ca-a074-299f4806ca27"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:52:31 crc kubenswrapper[4786]: I0127 13:52:31.328725 4786 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e6aee19f-6886-40ca-a074-299f4806ca27-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 27 13:52:31 crc kubenswrapper[4786]: I0127 13:52:31.475389 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6aee19f-6886-40ca-a074-299f4806ca27" path="/var/lib/kubelet/pods/e6aee19f-6886-40ca-a074-299f4806ca27/volumes" Jan 27 13:52:31 crc kubenswrapper[4786]: I0127 13:52:31.765102 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-r7tdj_must-gather-ztp6m_e6aee19f-6886-40ca-a074-299f4806ca27/copy/0.log" Jan 27 13:52:31 crc kubenswrapper[4786]: I0127 13:52:31.765490 4786 scope.go:117] "RemoveContainer" containerID="a7e485d3183b47d77b68662b159d77bd082e3948b3366e2209154e64e93673cd" Jan 27 13:52:31 crc kubenswrapper[4786]: I0127 13:52:31.765521 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-r7tdj/must-gather-ztp6m" Jan 27 13:52:31 crc kubenswrapper[4786]: I0127 13:52:31.783364 4786 scope.go:117] "RemoveContainer" containerID="d54951db120c28c630914acfc63efe21e1153d2fad606b95b39d6d74c39a5e1c" Jan 27 13:52:39 crc kubenswrapper[4786]: I0127 13:52:39.532360 4786 patch_prober.go:28] interesting pod/machine-config-daemon-7bxtk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:52:39 crc kubenswrapper[4786]: I0127 13:52:39.532993 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:53:09 crc kubenswrapper[4786]: I0127 13:53:09.533114 4786 patch_prober.go:28] interesting pod/machine-config-daemon-7bxtk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:53:09 crc kubenswrapper[4786]: I0127 13:53:09.533756 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:53:30 crc kubenswrapper[4786]: I0127 13:53:30.763067 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9cvzn"] Jan 27 13:53:30 crc kubenswrapper[4786]: E0127 13:53:30.765078 4786 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="558d0d6f-49ea-47d7-b4f1-c8aa2c439823" containerName="extract-content" Jan 27 13:53:30 crc kubenswrapper[4786]: I0127 13:53:30.765098 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="558d0d6f-49ea-47d7-b4f1-c8aa2c439823" containerName="extract-content" Jan 27 13:53:30 crc kubenswrapper[4786]: E0127 13:53:30.765117 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6aee19f-6886-40ca-a074-299f4806ca27" containerName="gather" Jan 27 13:53:30 crc kubenswrapper[4786]: I0127 13:53:30.765126 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6aee19f-6886-40ca-a074-299f4806ca27" containerName="gather" Jan 27 13:53:30 crc kubenswrapper[4786]: E0127 13:53:30.765145 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="558d0d6f-49ea-47d7-b4f1-c8aa2c439823" containerName="extract-utilities" Jan 27 13:53:30 crc kubenswrapper[4786]: I0127 13:53:30.765153 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="558d0d6f-49ea-47d7-b4f1-c8aa2c439823" containerName="extract-utilities" Jan 27 13:53:30 crc kubenswrapper[4786]: E0127 13:53:30.765170 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6aee19f-6886-40ca-a074-299f4806ca27" containerName="copy" Jan 27 13:53:30 crc kubenswrapper[4786]: I0127 13:53:30.765176 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6aee19f-6886-40ca-a074-299f4806ca27" containerName="copy" Jan 27 13:53:30 crc kubenswrapper[4786]: E0127 13:53:30.765198 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="558d0d6f-49ea-47d7-b4f1-c8aa2c439823" containerName="registry-server" Jan 27 13:53:30 crc kubenswrapper[4786]: I0127 13:53:30.765205 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="558d0d6f-49ea-47d7-b4f1-c8aa2c439823" containerName="registry-server" Jan 27 13:53:30 crc kubenswrapper[4786]: I0127 13:53:30.765414 4786 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="e6aee19f-6886-40ca-a074-299f4806ca27" containerName="copy" Jan 27 13:53:30 crc kubenswrapper[4786]: I0127 13:53:30.765435 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6aee19f-6886-40ca-a074-299f4806ca27" containerName="gather" Jan 27 13:53:30 crc kubenswrapper[4786]: I0127 13:53:30.765449 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="558d0d6f-49ea-47d7-b4f1-c8aa2c439823" containerName="registry-server" Jan 27 13:53:30 crc kubenswrapper[4786]: I0127 13:53:30.772169 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9cvzn" Jan 27 13:53:30 crc kubenswrapper[4786]: I0127 13:53:30.782254 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9cvzn"] Jan 27 13:53:30 crc kubenswrapper[4786]: I0127 13:53:30.910048 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7678e441-7068-49fa-888e-e5905ac7aa52-utilities\") pod \"redhat-marketplace-9cvzn\" (UID: \"7678e441-7068-49fa-888e-e5905ac7aa52\") " pod="openshift-marketplace/redhat-marketplace-9cvzn" Jan 27 13:53:30 crc kubenswrapper[4786]: I0127 13:53:30.910098 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7678e441-7068-49fa-888e-e5905ac7aa52-catalog-content\") pod \"redhat-marketplace-9cvzn\" (UID: \"7678e441-7068-49fa-888e-e5905ac7aa52\") " pod="openshift-marketplace/redhat-marketplace-9cvzn" Jan 27 13:53:30 crc kubenswrapper[4786]: I0127 13:53:30.910227 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqlcn\" (UniqueName: \"kubernetes.io/projected/7678e441-7068-49fa-888e-e5905ac7aa52-kube-api-access-cqlcn\") pod \"redhat-marketplace-9cvzn\" (UID: 
\"7678e441-7068-49fa-888e-e5905ac7aa52\") " pod="openshift-marketplace/redhat-marketplace-9cvzn" Jan 27 13:53:31 crc kubenswrapper[4786]: I0127 13:53:31.011869 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqlcn\" (UniqueName: \"kubernetes.io/projected/7678e441-7068-49fa-888e-e5905ac7aa52-kube-api-access-cqlcn\") pod \"redhat-marketplace-9cvzn\" (UID: \"7678e441-7068-49fa-888e-e5905ac7aa52\") " pod="openshift-marketplace/redhat-marketplace-9cvzn" Jan 27 13:53:31 crc kubenswrapper[4786]: I0127 13:53:31.012587 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7678e441-7068-49fa-888e-e5905ac7aa52-catalog-content\") pod \"redhat-marketplace-9cvzn\" (UID: \"7678e441-7068-49fa-888e-e5905ac7aa52\") " pod="openshift-marketplace/redhat-marketplace-9cvzn" Jan 27 13:53:31 crc kubenswrapper[4786]: I0127 13:53:31.012741 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7678e441-7068-49fa-888e-e5905ac7aa52-utilities\") pod \"redhat-marketplace-9cvzn\" (UID: \"7678e441-7068-49fa-888e-e5905ac7aa52\") " pod="openshift-marketplace/redhat-marketplace-9cvzn" Jan 27 13:53:31 crc kubenswrapper[4786]: I0127 13:53:31.013084 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7678e441-7068-49fa-888e-e5905ac7aa52-catalog-content\") pod \"redhat-marketplace-9cvzn\" (UID: \"7678e441-7068-49fa-888e-e5905ac7aa52\") " pod="openshift-marketplace/redhat-marketplace-9cvzn" Jan 27 13:53:31 crc kubenswrapper[4786]: I0127 13:53:31.013129 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7678e441-7068-49fa-888e-e5905ac7aa52-utilities\") pod \"redhat-marketplace-9cvzn\" (UID: \"7678e441-7068-49fa-888e-e5905ac7aa52\") " 
pod="openshift-marketplace/redhat-marketplace-9cvzn" Jan 27 13:53:31 crc kubenswrapper[4786]: I0127 13:53:31.037314 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqlcn\" (UniqueName: \"kubernetes.io/projected/7678e441-7068-49fa-888e-e5905ac7aa52-kube-api-access-cqlcn\") pod \"redhat-marketplace-9cvzn\" (UID: \"7678e441-7068-49fa-888e-e5905ac7aa52\") " pod="openshift-marketplace/redhat-marketplace-9cvzn" Jan 27 13:53:31 crc kubenswrapper[4786]: I0127 13:53:31.093102 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9cvzn" Jan 27 13:53:31 crc kubenswrapper[4786]: I0127 13:53:31.521340 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9cvzn"] Jan 27 13:53:31 crc kubenswrapper[4786]: W0127 13:53:31.525490 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7678e441_7068_49fa_888e_e5905ac7aa52.slice/crio-ca8f4f8d4c80851bf6d931f85083292da1cd971eb5a7b3415b3ab902f3dc5720 WatchSource:0}: Error finding container ca8f4f8d4c80851bf6d931f85083292da1cd971eb5a7b3415b3ab902f3dc5720: Status 404 returned error can't find the container with id ca8f4f8d4c80851bf6d931f85083292da1cd971eb5a7b3415b3ab902f3dc5720 Jan 27 13:53:32 crc kubenswrapper[4786]: I0127 13:53:32.276703 4786 generic.go:334] "Generic (PLEG): container finished" podID="7678e441-7068-49fa-888e-e5905ac7aa52" containerID="646c88730ae3e0d204828209530eb80b54a0702744d344f7a3798e07308faac1" exitCode=0 Jan 27 13:53:32 crc kubenswrapper[4786]: I0127 13:53:32.276931 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9cvzn" event={"ID":"7678e441-7068-49fa-888e-e5905ac7aa52","Type":"ContainerDied","Data":"646c88730ae3e0d204828209530eb80b54a0702744d344f7a3798e07308faac1"} Jan 27 13:53:32 crc kubenswrapper[4786]: I0127 13:53:32.276956 4786 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9cvzn" event={"ID":"7678e441-7068-49fa-888e-e5905ac7aa52","Type":"ContainerStarted","Data":"ca8f4f8d4c80851bf6d931f85083292da1cd971eb5a7b3415b3ab902f3dc5720"} Jan 27 13:53:33 crc kubenswrapper[4786]: I0127 13:53:33.289298 4786 generic.go:334] "Generic (PLEG): container finished" podID="7678e441-7068-49fa-888e-e5905ac7aa52" containerID="69bd2e0b1556e999af30e731364725e5d52bd098285dc05c5a20fa4bd512d6d6" exitCode=0 Jan 27 13:53:33 crc kubenswrapper[4786]: I0127 13:53:33.289352 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9cvzn" event={"ID":"7678e441-7068-49fa-888e-e5905ac7aa52","Type":"ContainerDied","Data":"69bd2e0b1556e999af30e731364725e5d52bd098285dc05c5a20fa4bd512d6d6"} Jan 27 13:53:34 crc kubenswrapper[4786]: I0127 13:53:34.298149 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9cvzn" event={"ID":"7678e441-7068-49fa-888e-e5905ac7aa52","Type":"ContainerStarted","Data":"7b1bc1546150d9c3ca978f0a6e6aeff2483cb2217d93cb58c8538fb47162f319"} Jan 27 13:53:34 crc kubenswrapper[4786]: I0127 13:53:34.321586 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9cvzn" podStartSLOduration=2.908389929 podStartE2EDuration="4.321568328s" podCreationTimestamp="2026-01-27 13:53:30 +0000 UTC" firstStartedPulling="2026-01-27 13:53:32.278817516 +0000 UTC m=+2795.489431635" lastFinishedPulling="2026-01-27 13:53:33.691995915 +0000 UTC m=+2796.902610034" observedRunningTime="2026-01-27 13:53:34.319209374 +0000 UTC m=+2797.529823513" watchObservedRunningTime="2026-01-27 13:53:34.321568328 +0000 UTC m=+2797.532182447" Jan 27 13:53:39 crc kubenswrapper[4786]: I0127 13:53:39.532415 4786 patch_prober.go:28] interesting pod/machine-config-daemon-7bxtk container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:53:39 crc kubenswrapper[4786]: I0127 13:53:39.532910 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 13:53:39 crc kubenswrapper[4786]: I0127 13:53:39.532954 4786 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" Jan 27 13:53:39 crc kubenswrapper[4786]: I0127 13:53:39.533475 4786 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"01dc83e541cd8f5738a00fe3145a7db1bf1be4f5116690254fc7c5d1c4fc7db1"} pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 13:53:39 crc kubenswrapper[4786]: I0127 13:53:39.533522 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" containerID="cri-o://01dc83e541cd8f5738a00fe3145a7db1bf1be4f5116690254fc7c5d1c4fc7db1" gracePeriod=600 Jan 27 13:53:40 crc kubenswrapper[4786]: I0127 13:53:40.345803 4786 generic.go:334] "Generic (PLEG): container finished" podID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerID="01dc83e541cd8f5738a00fe3145a7db1bf1be4f5116690254fc7c5d1c4fc7db1" exitCode=0 Jan 27 13:53:40 crc kubenswrapper[4786]: I0127 13:53:40.345897 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" event={"ID":"2c6a2646-52f7-41be-8a81-3fed6eac75cc","Type":"ContainerDied","Data":"01dc83e541cd8f5738a00fe3145a7db1bf1be4f5116690254fc7c5d1c4fc7db1"} Jan 27 13:53:40 crc kubenswrapper[4786]: I0127 13:53:40.346400 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" event={"ID":"2c6a2646-52f7-41be-8a81-3fed6eac75cc","Type":"ContainerStarted","Data":"6b765996b441dc539e87036e1725775ebefe591e0e8e2d4bcedafae00057050d"} Jan 27 13:53:40 crc kubenswrapper[4786]: I0127 13:53:40.346424 4786 scope.go:117] "RemoveContainer" containerID="9aee1b60d3250c89b1c0c42e1a4673943c8599c36c572d4bce7037dcd4019a1d" Jan 27 13:53:41 crc kubenswrapper[4786]: I0127 13:53:41.093518 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9cvzn" Jan 27 13:53:41 crc kubenswrapper[4786]: I0127 13:53:41.093574 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9cvzn" Jan 27 13:53:41 crc kubenswrapper[4786]: I0127 13:53:41.136781 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9cvzn" Jan 27 13:53:41 crc kubenswrapper[4786]: I0127 13:53:41.405536 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9cvzn" Jan 27 13:53:41 crc kubenswrapper[4786]: I0127 13:53:41.453190 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9cvzn"] Jan 27 13:53:43 crc kubenswrapper[4786]: I0127 13:53:43.371830 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9cvzn" podUID="7678e441-7068-49fa-888e-e5905ac7aa52" containerName="registry-server" 
containerID="cri-o://7b1bc1546150d9c3ca978f0a6e6aeff2483cb2217d93cb58c8538fb47162f319" gracePeriod=2 Jan 27 13:53:43 crc kubenswrapper[4786]: I0127 13:53:43.829150 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9cvzn" Jan 27 13:53:43 crc kubenswrapper[4786]: I0127 13:53:43.923519 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqlcn\" (UniqueName: \"kubernetes.io/projected/7678e441-7068-49fa-888e-e5905ac7aa52-kube-api-access-cqlcn\") pod \"7678e441-7068-49fa-888e-e5905ac7aa52\" (UID: \"7678e441-7068-49fa-888e-e5905ac7aa52\") " Jan 27 13:53:43 crc kubenswrapper[4786]: I0127 13:53:43.923571 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7678e441-7068-49fa-888e-e5905ac7aa52-utilities\") pod \"7678e441-7068-49fa-888e-e5905ac7aa52\" (UID: \"7678e441-7068-49fa-888e-e5905ac7aa52\") " Jan 27 13:53:43 crc kubenswrapper[4786]: I0127 13:53:43.923766 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7678e441-7068-49fa-888e-e5905ac7aa52-catalog-content\") pod \"7678e441-7068-49fa-888e-e5905ac7aa52\" (UID: \"7678e441-7068-49fa-888e-e5905ac7aa52\") " Jan 27 13:53:43 crc kubenswrapper[4786]: I0127 13:53:43.926220 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7678e441-7068-49fa-888e-e5905ac7aa52-utilities" (OuterVolumeSpecName: "utilities") pod "7678e441-7068-49fa-888e-e5905ac7aa52" (UID: "7678e441-7068-49fa-888e-e5905ac7aa52"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:53:43 crc kubenswrapper[4786]: I0127 13:53:43.934412 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7678e441-7068-49fa-888e-e5905ac7aa52-kube-api-access-cqlcn" (OuterVolumeSpecName: "kube-api-access-cqlcn") pod "7678e441-7068-49fa-888e-e5905ac7aa52" (UID: "7678e441-7068-49fa-888e-e5905ac7aa52"). InnerVolumeSpecName "kube-api-access-cqlcn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 13:53:43 crc kubenswrapper[4786]: I0127 13:53:43.950738 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7678e441-7068-49fa-888e-e5905ac7aa52-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7678e441-7068-49fa-888e-e5905ac7aa52" (UID: "7678e441-7068-49fa-888e-e5905ac7aa52"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 13:53:44 crc kubenswrapper[4786]: I0127 13:53:44.026366 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7678e441-7068-49fa-888e-e5905ac7aa52-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 13:53:44 crc kubenswrapper[4786]: I0127 13:53:44.026883 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqlcn\" (UniqueName: \"kubernetes.io/projected/7678e441-7068-49fa-888e-e5905ac7aa52-kube-api-access-cqlcn\") on node \"crc\" DevicePath \"\"" Jan 27 13:53:44 crc kubenswrapper[4786]: I0127 13:53:44.027025 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7678e441-7068-49fa-888e-e5905ac7aa52-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 13:53:44 crc kubenswrapper[4786]: I0127 13:53:44.382688 4786 generic.go:334] "Generic (PLEG): container finished" podID="7678e441-7068-49fa-888e-e5905ac7aa52" 
containerID="7b1bc1546150d9c3ca978f0a6e6aeff2483cb2217d93cb58c8538fb47162f319" exitCode=0 Jan 27 13:53:44 crc kubenswrapper[4786]: I0127 13:53:44.382739 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9cvzn" event={"ID":"7678e441-7068-49fa-888e-e5905ac7aa52","Type":"ContainerDied","Data":"7b1bc1546150d9c3ca978f0a6e6aeff2483cb2217d93cb58c8538fb47162f319"} Jan 27 13:53:44 crc kubenswrapper[4786]: I0127 13:53:44.382768 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9cvzn" event={"ID":"7678e441-7068-49fa-888e-e5905ac7aa52","Type":"ContainerDied","Data":"ca8f4f8d4c80851bf6d931f85083292da1cd971eb5a7b3415b3ab902f3dc5720"} Jan 27 13:53:44 crc kubenswrapper[4786]: I0127 13:53:44.382788 4786 scope.go:117] "RemoveContainer" containerID="7b1bc1546150d9c3ca978f0a6e6aeff2483cb2217d93cb58c8538fb47162f319" Jan 27 13:53:44 crc kubenswrapper[4786]: I0127 13:53:44.382857 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9cvzn" Jan 27 13:53:44 crc kubenswrapper[4786]: I0127 13:53:44.429302 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9cvzn"] Jan 27 13:53:44 crc kubenswrapper[4786]: I0127 13:53:44.430827 4786 scope.go:117] "RemoveContainer" containerID="69bd2e0b1556e999af30e731364725e5d52bd098285dc05c5a20fa4bd512d6d6" Jan 27 13:53:44 crc kubenswrapper[4786]: I0127 13:53:44.435049 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9cvzn"] Jan 27 13:53:44 crc kubenswrapper[4786]: I0127 13:53:44.451753 4786 scope.go:117] "RemoveContainer" containerID="646c88730ae3e0d204828209530eb80b54a0702744d344f7a3798e07308faac1" Jan 27 13:53:44 crc kubenswrapper[4786]: I0127 13:53:44.483777 4786 scope.go:117] "RemoveContainer" containerID="7b1bc1546150d9c3ca978f0a6e6aeff2483cb2217d93cb58c8538fb47162f319" Jan 27 13:53:44 crc kubenswrapper[4786]: E0127 13:53:44.484579 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b1bc1546150d9c3ca978f0a6e6aeff2483cb2217d93cb58c8538fb47162f319\": container with ID starting with 7b1bc1546150d9c3ca978f0a6e6aeff2483cb2217d93cb58c8538fb47162f319 not found: ID does not exist" containerID="7b1bc1546150d9c3ca978f0a6e6aeff2483cb2217d93cb58c8538fb47162f319" Jan 27 13:53:44 crc kubenswrapper[4786]: I0127 13:53:44.484672 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b1bc1546150d9c3ca978f0a6e6aeff2483cb2217d93cb58c8538fb47162f319"} err="failed to get container status \"7b1bc1546150d9c3ca978f0a6e6aeff2483cb2217d93cb58c8538fb47162f319\": rpc error: code = NotFound desc = could not find container \"7b1bc1546150d9c3ca978f0a6e6aeff2483cb2217d93cb58c8538fb47162f319\": container with ID starting with 7b1bc1546150d9c3ca978f0a6e6aeff2483cb2217d93cb58c8538fb47162f319 not found: 
ID does not exist" Jan 27 13:53:44 crc kubenswrapper[4786]: I0127 13:53:44.484720 4786 scope.go:117] "RemoveContainer" containerID="69bd2e0b1556e999af30e731364725e5d52bd098285dc05c5a20fa4bd512d6d6" Jan 27 13:53:44 crc kubenswrapper[4786]: E0127 13:53:44.485081 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69bd2e0b1556e999af30e731364725e5d52bd098285dc05c5a20fa4bd512d6d6\": container with ID starting with 69bd2e0b1556e999af30e731364725e5d52bd098285dc05c5a20fa4bd512d6d6 not found: ID does not exist" containerID="69bd2e0b1556e999af30e731364725e5d52bd098285dc05c5a20fa4bd512d6d6" Jan 27 13:53:44 crc kubenswrapper[4786]: I0127 13:53:44.485133 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69bd2e0b1556e999af30e731364725e5d52bd098285dc05c5a20fa4bd512d6d6"} err="failed to get container status \"69bd2e0b1556e999af30e731364725e5d52bd098285dc05c5a20fa4bd512d6d6\": rpc error: code = NotFound desc = could not find container \"69bd2e0b1556e999af30e731364725e5d52bd098285dc05c5a20fa4bd512d6d6\": container with ID starting with 69bd2e0b1556e999af30e731364725e5d52bd098285dc05c5a20fa4bd512d6d6 not found: ID does not exist" Jan 27 13:53:44 crc kubenswrapper[4786]: I0127 13:53:44.485160 4786 scope.go:117] "RemoveContainer" containerID="646c88730ae3e0d204828209530eb80b54a0702744d344f7a3798e07308faac1" Jan 27 13:53:44 crc kubenswrapper[4786]: E0127 13:53:44.485557 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"646c88730ae3e0d204828209530eb80b54a0702744d344f7a3798e07308faac1\": container with ID starting with 646c88730ae3e0d204828209530eb80b54a0702744d344f7a3798e07308faac1 not found: ID does not exist" containerID="646c88730ae3e0d204828209530eb80b54a0702744d344f7a3798e07308faac1" Jan 27 13:53:44 crc kubenswrapper[4786]: I0127 13:53:44.485622 4786 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"646c88730ae3e0d204828209530eb80b54a0702744d344f7a3798e07308faac1"} err="failed to get container status \"646c88730ae3e0d204828209530eb80b54a0702744d344f7a3798e07308faac1\": rpc error: code = NotFound desc = could not find container \"646c88730ae3e0d204828209530eb80b54a0702744d344f7a3798e07308faac1\": container with ID starting with 646c88730ae3e0d204828209530eb80b54a0702744d344f7a3798e07308faac1 not found: ID does not exist" Jan 27 13:53:45 crc kubenswrapper[4786]: I0127 13:53:45.477453 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7678e441-7068-49fa-888e-e5905ac7aa52" path="/var/lib/kubelet/pods/7678e441-7068-49fa-888e-e5905ac7aa52/volumes" Jan 27 13:55:39 crc kubenswrapper[4786]: I0127 13:55:39.533295 4786 patch_prober.go:28] interesting pod/machine-config-daemon-7bxtk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 13:55:39 crc kubenswrapper[4786]: I0127 13:55:39.533933 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7bxtk" podUID="2c6a2646-52f7-41be-8a81-3fed6eac75cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"